diff --git a/17/paper.pdf b/17/paper.pdf new file mode 100644 index 0000000000000000000000000000000000000000..29b070ca439295d0c62948f02c5205a1603a47c6 --- /dev/null +++ b/17/paper.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:c8b0fc065482b1eb4516d89a5be07f984a421548fd1c966979f01941928ee03a +size 1797770 diff --git a/17/replication_package/code/LICENSE-code.md b/17/replication_package/code/LICENSE-code.md new file mode 100644 index 0000000000000000000000000000000000000000..0f47ff517e199fa5de33feb56b5a6c4d39e7fd9a --- /dev/null +++ b/17/replication_package/code/LICENSE-code.md @@ -0,0 +1,11 @@ +MIT License + +Copyright (c) 2021 Hunt Allcott, Matthew Gentzkow, and Lena Song + +Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + + diff --git a/17/replication_package/code/LICENSE-data.md b/17/replication_package/code/LICENSE-data.md new file mode 100644 index 0000000000000000000000000000000000000000..195288637dcd95a940521b54451bb6899832d371 --- /dev/null +++ b/17/replication_package/code/LICENSE-data.md @@ -0,0 +1,297 @@ + +[Creative Commons Attribution 4.0 International Public License](https://creativecommons.org/licenses/by/4.0/) +- applies to databases, images, tables, text, and any other objects + + +#### Creative Commons Attribution 4.0 International Public License +By exercising the Licensed Rights (defined below), You accept and agree +to be bound by the terms and conditions of this Creative Commons +Attribution 4.0 International Public License ("Public License"). To the +extent this Public License may be interpreted as a contract, You are +granted the Licensed Rights in consideration of Your acceptance of these +terms and conditions, and the Licensor grants You such rights in +consideration of benefits the Licensor receives from making the Licensed +Material available under these terms and conditions. + +Section 1 – Definitions. + +(a) Adapted Material means material subject to Copyright and Similar +Rights that is derived from or based upon the Licensed Material and in +which the Licensed Material is translated, altered, arranged, +transformed, or otherwise modified in a manner requiring permission +under the Copyright and Similar Rights held by the Licensor. For +purposes of this Public License, where the Licensed Material is a +musical work, performance, or sound recording, Adapted Material is +always produced where the Licensed Material is synched in timed relation +with a moving image. 
+ +(b) Adapter's License means the license You apply to Your Copyright +and Similar Rights in Your contributions to Adapted Material in +accordance with the terms and conditions of this Public License. + +(c) Copyright and Similar Rights means copyright and/or similar rights +closely related to copyright including, without limitation, performance, +broadcast, sound recording, and Sui Generis Database Rights, without +regard to how the rights are labeled or categorized. For purposes of +this Public License, the rights specified in Section 2(b)(1)-(2) are not +Copyright and Similar Rights. + +(d) Effective Technological Measures means those measures that, in the +absence of proper authority, may not be circumvented under laws +fulfilling obligations under Article 11 of the WIPO Copyright Treaty +adopted on December 20, 1996, and/or similar international agreements. + +(e) Exceptions and Limitations means fair use, fair dealing, and/or any +other exception or limitation to Copyright and Similar Rights that +applies to Your use of the Licensed Material. + +(f) Licensed Material means the artistic or literary work, database, or +other material to which the Licensor applied this Public License. + +(g) Licensed Rights means the rights granted to You subject to the terms +and conditions of this Public License, which are limited to all +Copyright and Similar Rights that apply to Your use of the Licensed +Material and that the Licensor has authority to license. + +(h) Licensor means the individual(s) or entity(ies) granting rights +under this Public License. + +(i) Share means to provide material to the public by any means or +process that requires permission under the Licensed Rights, such as +reproduction, public display, public performance, distribution, +dissemination, communication, or importation, and to make material +available to the public including in ways that members of the public may +access the material from a place and at a time individually chosen by +them. + +(j) Sui Generis Database Rights means rights other than copyright +resulting from Directive 96/9/EC of the European Parliament and of the +Council of 11 March 1996 on the legal protection of databases, as +amended and/or succeeded, as well as other essentially equivalent rights +anywhere in the world. + +(k) You means the individual or entity exercising the Licensed Rights +under this Public License. Your has a corresponding meaning. + +Section 2 – Scope. + +(a) License grant. + + 1. Subject to the terms and conditions of this Public License, the + Licensor hereby grants You a worldwide, royalty-free, + non-sublicensable, non-exclusive, irrevocable license to exercise the + Licensed Rights in the Licensed Material to: + + A. reproduce and Share the Licensed Material, in whole or in part; + and + + B. produce, reproduce, and Share Adapted Material. + + 2. Exceptions and Limitations. For the avoidance of doubt, where + Exceptions and Limitations apply to Your use, this Public License does + not apply, and You do not need to comply with its terms and + conditions. + + 3. Term. The term of this Public License is specified in Section 6(a). + + 4. Media and formats; technical modifications allowed. The Licensor + authorizes You to exercise the Licensed Rights in all media and + formats whether now known or hereafter created, and to make technical + modifications necessary to do so. 
The Licensor waives and/or agrees + not to assert any right or authority to forbid You from making + technical modifications necessary to exercise the Licensed Rights, + including technical modifications necessary to circumvent Effective + Technological Measures. For purposes of this Public License, simply + making modifications authorized by this Section 2(a)(4) never produces + Adapted Material. + + 5. Downstream recipients. + + A. Offer from the Licensor – Licensed Material. Every recipient of + the Licensed Material automatically receives an offer from the + Licensor to exercise the Licensed Rights under the terms and + conditions of this Public License. + + B. No downstream restrictions. You may not offer or impose any + additional or different terms or conditions on, or apply any + Effective Technological Measures to, the Licensed Material if + doing so restricts exercise of the Licensed Rights by any + recipient of the Licensed Material. + + 6. No endorsement. Nothing in this Public License constitutes or may + be construed as permission to assert or imply that You are, or that + Your use of the Licensed Material is, connected with, or sponsored, + endorsed, or granted official status by, the Licensor or others + designated to receive attribution as provided in Section + 3(a)(1)(A)(i). + +(b) Other rights. + + 1. Moral rights, such as the right of integrity, are not licensed + under this Public License, nor are publicity, privacy, and/or other + similar personality rights; however, to the extent possible, the + Licensor waives and/or agrees not to assert any such rights held by + the Licensor to the limited extent necessary to allow You to exercise + the Licensed Rights, but not otherwise. + + 2. Patent and trademark rights are not licensed under this Public + License. + + 3. To the extent possible, the Licensor waives any right to collect + royalties from You for the exercise of the Licensed Rights, whether + directly or through a collecting society under any voluntary or + waivable statutory or compulsory licensing scheme. In all other cases + the Licensor expressly reserves any right to collect such royalties. + +Section 3 – License Conditions. + +Your exercise of the Licensed Rights is expressly made subject to the +following conditions. + +(a) Attribution. + + 1. If You Share the Licensed Material (including in modified form), + You must: + + A. retain the following if it is supplied by the Licensor with the + Licensed Material: + + i. identification of the creator(s) of the Licensed Material and + any others designated to receive attribution, in any reasonable + manner requested by the Licensor (including by pseudonym if + designated); + + ii. a copyright notice; + + iii. a notice that refers to this Public License; + + iv. a notice that refers to the disclaimer of warranties; + + v. a URI or hyperlink to the Licensed Material to the extent + reasonably practicable; + + B. indicate if You modified the Licensed Material and retain an + indication of any previous modifications; and + + C. indicate the Licensed Material is licensed under this Public + License, and include the text of, or the URI or hyperlink to, this + Public License. + + 2. You may satisfy the conditions in Section 3(a)(1) in any reasonable + manner based on the medium, means, and context in which You Share the + Licensed Material. For example, it may be reasonable to satisfy the + conditions by providing a URI or hyperlink to a resource that includes + the required information. + + 3. 
If requested by the Licensor, You must remove any of the + information required by Section 3(a)(1)(A) to the extent reasonably + practicable. + + 4. If You Share Adapted Material You produce, the Adapter's License + You apply must not prevent recipients of the Adapted Material from + complying with this Public License. + +Section 4 – Sui Generis Database Rights. + +Where the Licensed Rights include Sui Generis Database Rights that apply +to Your use of the Licensed Material: + +(a) for the avoidance of doubt, Section 2(a)(1) grants You the right to +extract, reuse, reproduce, and Share all or a substantial portion of the +contents of the database; + +(b) if You include all or a substantial portion of the database contents +in a database in which You have Sui Generis Database Rights, then the +database in which You have Sui Generis Database Rights (but not its +individual contents) is Adapted Material; and + +(c) You must comply with the conditions in Section 3(a) if You Share all +or a substantial portion of the contents of the database. + +For the avoidance of doubt, this Section 4 supplements and does not +replace Your obligations under this Public License where the Licensed +Rights include other Copyright and Similar Rights. + +Section 5 – Disclaimer of Warranties and Limitation of Liability. + +(a) Unless otherwise separately undertaken by the Licensor, to the +extent possible, the Licensor offers the Licensed Material as-is and +as-available, and makes no representations or warranties of any kind +concerning the Licensed Material, whether express, implied, statutory, +or other. This includes, without limitation, warranties of title, +merchantability, fitness for a particular purpose, non-infringement, +absence of latent or other defects, accuracy, or the presence or absence +of errors, whether or not known or discoverable. Where disclaimers of +warranties are not allowed in full or in part, this disclaimer may not +apply to You. + +(b) To the extent possible, in no event will the Licensor be liable to +You on any legal theory (including, without limitation, negligence) or +otherwise for any direct, special, indirect, incidental, consequential, +punitive, exemplary, or other losses, costs, expenses, or damages +arising out of this Public License or use of the Licensed Material, even +if the Licensor has been advised of the possibility of such losses, +costs, expenses, or damages. Where a limitation of liability is not +allowed in full or in part, this limitation may not apply to You. + +(c) The disclaimer of warranties and limitation of liability provided +above shall be interpreted in a manner that, to the extent possible, +most closely approximates an absolute disclaimer and waiver of all +liability. + +Section 6 – Term and Termination. + +(a) This Public License applies for the term of the Copyright and +Similar Rights licensed here. However, if You fail to comply with this +Public License, then Your rights under this Public License terminate +automatically. + +(b) Where Your right to use the Licensed Material has terminated under +Section 6(a), it reinstates: + + 1. automatically as of the date the violation is cured, provided it is + cured within 30 days of Your discovery of the violation; or + + 2. upon express reinstatement by the Licensor. + + For the avoidance of doubt, this Section 6(b) does not affect any + right the Licensor may have to seek remedies for Your violations of + this Public License. 
+
+(c) For the avoidance of doubt, the Licensor may also offer the Licensed
+Material under separate terms or conditions or stop distributing the
+Licensed Material at any time; however, doing so will not terminate this
+Public License.
+
+(d) Sections 1, 5, 6, 7, and 8 survive termination of this Public License.
+
+Section 7 – Other Terms and Conditions.
+
+(a) The Licensor shall not be bound by any additional or different terms
+or conditions communicated by You unless expressly agreed.
+
+(b) Any arrangements, understandings, or agreements regarding the
+Licensed Material not stated herein are separate from and independent of
+the terms and conditions of this Public License.
+
+Section 8 – Interpretation.
+
+(a) For the avoidance of doubt, this Public License does not, and shall
+not be interpreted to, reduce, limit, restrict, or impose conditions on
+any use of the Licensed Material that could lawfully be made without
+permission under this Public License.
+
+(b) To the extent possible, if any provision of this Public License is
+deemed unenforceable, it shall be automatically reformed to the minimum
+extent necessary to make it enforceable. If the provision cannot be
+reformed, it shall be severed from this Public License without affecting
+the enforceability of the remaining terms and conditions.
+
+(c) No term or condition of this Public License will be waived and no
+failure to comply consented to unless expressly agreed to by the
+Licensor.
+
+(d) Nothing in this Public License constitutes or may be interpreted as
+a limitation upon, or waiver of, any privileges and immunities that
+apply to the Licensor or You, including from the legal processes of any
+jurisdiction or authority.
diff --git a/17/replication_package/code/README.md b/17/replication_package/code/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..c7de4d8eeab0a7f57baa8a3be636ad62d6360a59
--- /dev/null
+++ b/17/replication_package/code/README.md
@@ -0,0 +1,303 @@
+# README
+
+## Overview
+The code in this replication package constructs the analysis tables, figures and scalars found in our paper using Stata and R.
+The results presented in our paper are obtained in three steps.
+In the first step, all original raw data is processed from our server.
+In the second step, the raw data is stripped of any PII elements and all anonymized datasets are merged together to create a dataset named `final_data_sample.dta`.
+All of the analysis presented in the paper is based on this anonymized data.
+In the third step, all descriptive tables and figures as well as all regression outputs are produced.
+
+In this replication archive, we provide the code necessary to carry out all three steps. We provide the anonymized dataset `final_data_sample.dta` in a separate archive (Allcott, Gentzkow, and Song, 2023): https://doi.org/10.7910/DVN/GN636M.
+
+Once you have access to that separate archive, download `final_data_sample.dta` and manually add the dataset to this archive under the folder `data/temptation/output`. This will allow you to run the `/analysis/` module (third step), which constructs the tables, figures and scalars found in the paper. The `/data/` module relies on confidential data that is not provided, and therefore will not run properly.
+
+This replication archive contains additional files in the `/docs/` folder to help with replication.
+The first is `DescriptionOfSteps.pdf`, which describes which modules are involved in each step and how they relate to each other.
The file `Step1_Step2_DAG.pdf` illustrates, via a directed acyclic graph, how steps 1 and 2 are carried out.
+The third, `MappingsTablesAndFigures.pdf`, provides a mapping of all the tables
+and figures to their corresponding program.
+
+The replication routine can be run by following the instructions in the **Instructions to replicators** section of this README.
+The authors will provide replication support if necessary. The replicator should expect the code to run for about 40 minutes.
+
+
+## Data Availability and Provenance Statements
+This archive includes data that was collected from an Android application and from surveys as detailed in Section 3 of the paper.
+The folder `experiment_design` contains the questionnaires of all 5 surveys (recruitment survey and the next 4 surveys administered to our sample). It also contains a subfolder `AppScreenshots` that has various screenshots of our application Phone Dashboard.
+In the separate archive, we provide the anonymized dataset `final_data_sample.dta`, which gathers aggregated usage data from the application and survey data. Each individual in our final sample is assigned a user ID. Variables that correspond to information coming from the application start with `PD_P1`, `PD_P2`, `PD_P3`, `PD_P4`, `PD_P5` (depending on the period, 1-5, in which they were collected). Variables that correspond to information coming from surveys start with either `S1_`, `S2_`, `S3_` or `S4_` (depending on the survey, 1-4, in which they were collected).
+The `codebook.xlsx` file at the root of the repository is the codebook for `final_data_sample.dta`. It lists all the variables found in the dataset along with their labels, units and values (if applicable).
+
+### Statement about Rights
+We certify that the author(s) of the manuscript have legitimate access to and permission to use the data used in this manuscript.
+
+### License for Data
+
+All databases, images, tables, text, and any other objects are available under a Creative Commons Attribution 4.0 International Public License. Please refer to the document `LICENSE-data.md` at the root of the repository.
+
+### Summary of Availability
+
+Some data **cannot be made** publicly available.
+
+
+### Details on each Data Source
+
+The raw data for this project are confidential and were collected by the authors. The authors will assist with any reasonable replication attempts and can be contacted by email. This paper uses data obtained from an Android application, Phone Dashboard, and from surveys.
+
+The dataset `final_data_sample.dta`, provided in the separate archive, combines the data from both our surveys and our Phone Dashboard application. It is derived after processing all the raw confidential data from the Phone Dashboard application. This dataset aggregates usage data at the user level and combines it with variables obtained from our surveys. All variables in this dataset have corresponding value labels. One can also refer to the provided codebook, `codebook.xlsx`, at the root of the repository, for more information on each variable.
+
+## Dataset list
+- As detailed in the graph `docs/Step1_Step2_DAG.pdf`, our pipeline processes the raw data from our Phone Dashboard application as well as from our surveys. The code for this data-processing is provided in the `/data/` folder. Multiple intermediate files are generated through this pipeline. These files are not provided as part of this replication archive for confidentiality reasons.
+
+- The file `final_data_sample.dta` is obtained at the end of the data-processing pipeline. It combines data from the application and the surveys. It serves as input for the analysis figures and tables. As noted above, this file is provided in the separate archive rather than in this replication archive.
+
+
+
+## Computational requirements
+All requirements must be installed and set up for command line usage. For further detail, see the **Command Line Usage** section below.
+
+We manage Python and R installations using conda or miniconda.
+To build the repository as-is, the following applications are additionally required:
+* LyX 2.3.5.2
+* R 3.6.3
+* Stata 16.1
+* Python 3.7
+
+These applications are used by the scripts contained in the `setup` folder of the repository. Instructions to set up the environment are found below in the **Instructions to replicators** section.
+
+### Software requirements
+The file `setup/conda_env.yaml` will install all the R and Python dependencies. Please refer to the **Instructions to replicators** section for detailed steps on how to install the required environment and run the scripts.
+Below we list the software and packages required to run the repository, along with the versions used.
+
+ - Python 3.7
+   - `pyyaml` (5.3.1)
+   - `numpy` (1.16.2)
+   - `pandas` (0.25.0)
+   - `matplotlib` (3.0.3)
+   - `gitpython` (2.1.15)
+   - `termcolor` (1.1.0)
+   - `colorama` (0.4.3)
+   - `jupyter` (4.6.3)
+   - `future` (0.17.1)
+   - `linearmodels` (4.17)
+   - `patsy` (0.5.1)
+   - `stochatreat` (0.0.8)
+   - `pympler` (0.9)
+   - `memory_profiler`
+   - `dask` (1.2.1)
+   - `openpyxl` (2.6.4)
+   - `requests` (2.24.0)
+   - `pip` (19)
+
+ - R 3.6
+   - `yaml` (2.2.1)
+   - `haven` (2.3.1)
+   - `tidyverse` (1.3.1)
+   - `r.utils` (4.0.3)
+   - `plm` (2.6.1)
+   - `janitor` (2.1.0)
+   - `rio` (0.5.26)
+   - `lubridate` (1.7.10)
+   - `magrittr` (2.0.1)
+   - `stargazer` (5.2.2)
+   - `rootSolve` (1.8.2.1)
+   - `rlist` (0.4.6.1)
+   - `ebal` (0.1.6)
+   - `latex2exp` (0.5.0)
+   - `estimatr` (0.30.2)
+
+### Controlled Randomness
+We control randomness by setting random seeds.
+1. For the data-processing: The program `/data/source/clean_master/cleaner.py` has its own random seed set on line 24. The program `data/source/build_master/builder.py` calls the `/lib/` file `lib/data_helpers/clean_survey.py`, which sets a random seed on line 21. The program `lib/experiment_specs/study_config.py` contains parameters used by `data/source/clean_master/management/earnings.py` and `data/source/clean_master/management/midline_prep.py`, which include a random seed set on line 459.
+2. For the analysis: The program `lib/ModelFunctions.R` contains parameters used by `structural/code/StructuralModel.R` and `treatment_effects/code/ModelHeterogeneity.R`, which include a random seed set on line 48.
+
+### Memory and Runtime Requirements
+The folder `/data/` is responsible for all the data-processing using the raw Phone Dashboard data as well as the survey data. At the end of this data-processing, the file `final_data_sample.dta` is created. In the presence of the raw confidential data (which is not provided with this replication archive), this whole process normally takes around 60 hours on 20 CPUs with 12GB of memory per CPU.
+
+The folder `/analysis/` constructs all the figures, tables and scalars used in the paper from the `final_data_sample.dta` dataset provided in the separate archive. The replicator will be able to run all scripts in this folder.
The whole analysis takes around 40 minutes to run on a computer with 4 cores and 16GB of memory. Most files within `analysis` take less than 5 minutes to run. However, the file `analysis/structural/code/StructuralModel.R` takes around 20 minutes to run.
+
+#### Summary
+Approximate time needed to reproduce the analyses on a standard (2022) desktop machine is <1 hour.
+
+#### Details
+The `analysis` code was last run on a **4-core Intel-based laptop with MacOS version 10.15.5**.
+
+The `data` code was last run on **an Intel server with 20 CPUs and 12GB of memory per CPU**. Computation took 60 hours.
+
+
+## Description of programs/code
+In this replication archive:
+- The folder `/data/source/` is responsible for all the processing of the data from our Phone Dashboard application and our surveys.
+The subfolders `/data/source/build_master/`, `/data/source/clean_master/` and `/data/source/exporters/` contain Python files that define the classes and auxiliary functions called in the main script `/data/run.py`. This main script generates the master files gathering all information at the user level or at the user-app level.
+
+- The folder `/data/temptation/` is responsible for cleaning the master files produced as output of `/data/source/`.
+It outputs the anonymized dataset `final_data_sample.dta`, which contains all the information at the user level. This dataset is used throughout the analysis of the paper.
+
+- The folder `/analysis/` contains all the programs generating the tables, figures and scalars in the paper. The programs in the `/analysis/` folder have been organized into three subfolders:
+
+  1. `/analysis/descriptive/` produces tables and charts of descriptive statistics. It contains the programs below:
+  * `code/CommitmentDemand.do` (willingness-to-pay and limit tightness plots)
+  * `code/COVIDResponse.do` (survey stats on response to COVID)
+  * `code/DataDescriptives.do` (sample demographics and attrition tables)
+  * `code/HeatmapPlots.R` (predicted vs. actual FITSBY usage)
+  * `code/QualitativeEvidence.do` (descriptive plots for addiction scale, interest in bonus/limit)
+  * `code/SampleStatistics.do` (statistics about completion rates for study)
+  * `code/Scalars.do` (statistics about MPL and ideal usage reduction)
+  * `code/Temptation.do` (plots desired usage change for various tempting activities)
+
+  2. `/analysis/structural/` estimates parameters and generates plots for our structural model. It contains the program below:
+  * `code/StructuralModel.R`
+
+  3. `/analysis/treatment_effects/` produces model-free estimates of treatment effects. It contains the programs below:
+  * `code/Beliefs.do` (compares actual treatment effect with predicted treatment effect)
+  * `code/CommitmentResponse.do` (plots how treatment effect differs by SMS addiction scale and other survey indicators)
+  * `code/FDRTable.do` (estimates how treatment effect differs by SMS addiction scale and other indicators, adjusted for false-discovery rate. Also plots some descriptive statistics)
+  * `code/HabitFormation.do` (compares actual and predicted usage)
+  * `code/Heterogeneity.do` (plots heterogeneous treatment effects)
+  * `code/HeterogeneityInstrumental.do` (plots heterogeneous treatment effects)
+  * `code/ModelHeterogeneity.R` (generates other heterogeneity plots, some temptation plots)
+  * `code/SurveyValidation.do` (plots effect of rewarding accurate usage prediction on usage prediction accuracy)
+  Most of the programs in the analysis folder rely on the dataset `final_data_sample.dta`.
However, some programs further require the datasets `final_data.dta` and `AnalysisUser.dta` to compute certain scalars mentioned in the paper. These programs are `/analysis/descriptive/code/DataDescriptives.do`, `/analysis/descriptive/code/SampleStatistics.do`, `/analysis/descriptive/code/Scalars.do` and `/analysis/treatment_effects/code/ModelHeterogeneity.R`. Since these two datasets are not provided with the replication archive for confidentiality reasons, the portions of the code requiring them have been commented out in the relevant programs.
+
+ - The folder `/lib/` contains auxiliary functions and helpers.
+ - The folder `/paper_slides/` contains all the inputs and files needed to compile the paper. The subfolder `/paper_slides/figures/` contains screenshots and other figures that are not derived from programs. The subfolder `/paper_slides/figures/` contains the paper LyX file, the bibliography, and the `motivation_correlation.lyx` LyX table.
+ - The folder `setup` contains files to set up the conda environment and to install the R, Python and Stata dependencies.
+ - The folder `experiment_design` contains the questionnaires for our surveys as well as screenshots from the Phone Dashboard application.
+ - The folder `/docs/` contains additional documents to guide the replicator. The file `docs/DescriptionOfSteps.pdf` gives a high-level overview of the steps involved in the data processing from our
+ application Phone Dashboard to the analysis in the paper. It splits the data-processing into three steps:
+ 1) Processing the Raw Data from PhoneDashboard (done by the `/data/source/` folder)
+ 2) Cleaning the Original Data from PhoneDashboard (done by the `/data/temptation/` folder)
+ 3) Analyzing the Anonymized Data (done by the `/analysis/` folder)
+ Since the data inputs for steps 1 and 2 are not provided with this replication archive, we include a further document, `docs/Step1_Step2_DAG.pdf`, that illustrates how we carried them out internally via a
+ directed acyclic graph. Finally, the file `docs/MappingsTablesAndFigures.pdf` provides a mapping of all the tables and figures to their corresponding program.
+
+ Note that the modules or portions of programs that cannot be run due to unshared data have been commented out in the relevant main run scripts.
+
+### License for code
+
+All code is available under an MIT License. Please refer to the document `LICENSE-code.md` at the root of the repository.
+
+## Instructions to replicators
+
+### Setup
+
+1. Create a `config_user.yaml` file in the root directory. A template can be found in the `setup` subdirectory. See the **User Configuration** section below for further detail. If you do not have any external paths you wish to specify, and you wish to use the default executable names, you can skip this step and the default `config_user.yaml` will be copied over in step 4.
+
+2. If you already have conda set up on your local machine, feel free to skip this step. If not, this step will install a lightweight version of conda that will not interfere with your current Python and R installations.
+Install miniconda and jdk, which are used to manage the R/Python virtual environment, if you have not already done this. You can install these programs from their websites [here for miniconda](https://docs.conda.io/en/latest/miniconda.html) and [here for jdk](https://www.oracle.com/java/technologies/javase-downloads.html).
If you use Homebrew (which can be downloaded [here](https://brew.sh/)), these two programs can be installed as follows:
+    ```
+    brew install --cask miniconda
+    brew install --cask oracle-jdk
+    ```
+Once you have done this, you need to initialize conda by running the following lines and restarting your terminal:
+    ```
+    conda config --set auto_activate_base false
+    conda init $(echo $0 | cut -d'-' -f 2)
+    ```
+
+3. Create the conda environment with the command:
+    ```
+    conda env create -f setup/conda_env.yaml
+    ```
+
+4. Run the `check_setup.py` file. One way to do this is to run the following bash command in a terminal from the `setup` subdirectory:
+    ```
+    python3 check_setup.py
+    ```
+
+5. Install the R dependencies that cannot be managed using conda with the `setup_r.r` file. One way to do this is to run the following bash command in a terminal from the `setup` subdirectory:
+    ```
+    Rscript setup_r.r
+    ```
+
+### Usage
+
+Once you have successfully completed the **Setup** section above, make sure the virtual environment associated with this project is activated each time you run any analysis, using the command below (replacing `PROJECT_NAME` with the name of this project).
+```
+ conda activate PROJECT_NAME
+```
+If you wish to return to your base installation of Python and R, you can easily deactivate this virtual environment using the command below:
+```
+ conda deactivate
+```
+
+### Adding Packages
+#### Python
+Add any required packages to `setup/conda_env.yaml`. If possible, add the package version number. If there is a package that is not available from `conda`, add it to the `pip` section of the `yaml` file. To avoid re-running the entire environment setup, you can install individual packages from `conda` with the command
+
+```
+ conda install -c conda-forge PACKAGE_NAME
+```
+
+#### R
+Add any required packages that are available via CRAN to `setup/conda_env.yaml`. These must be prepended with `r-`. If there is a package that is only available from GitHub and not from CRAN, add this package to `setup/setup_r.r`. These individual packages can be added in the same way as Python packages above (with the `r-` prefix).
+
+#### Stata
+
+Install Stata dependencies using `setup/download_stata_ado.do`. We keep all non-base Stata ado files in the `lib` subdirectory, so most non-base Stata ado files will be versioned. To add additional Stata dependencies, use the following bash command from the `setup` subdirectory:
+```
+stata-mp -e download_stata_ado.do
+```
+
+### Build
+
+1. Follow the *Setup* instructions above.
+
+2. From the root of the repository, run the following bash command:
+    ```
+    python run_all.py
+    ```
+
+### Command Line Usage
+
+For specific instructions on how to set up command line usage for an application, refer to the [RA manual](https://github.com/gentzkow/template/wiki/Command-Line-Usage).
+
+By default, the repository assumes the following executable names for the following applications:
+
+```
+application : executable
+python : python
+lyx : lyx
+r : Rscript
+stata : statamp (will need to be updated if using a version of Stata that is not Stata-MP)
+```
+
+Default executable names can be updated in `config_user.yaml`. For further detail, see the **User Configuration** section below.
+
+## User Configuration
+`config_user.yaml` contains settings and metadata such as local paths that are specific to an individual user and thus should not be committed to Git. For this repository, this includes local paths to [external dependencies](https://github.com/gentzkow/template/wiki/External-Dependencies) as well as executable names for locally installed software.
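+
+For illustration, a `config_user.yaml` might look like the sketch below. The keys, paths, and values shown here are hypothetical placeholders rather than this project's actual defaults; copy the template from the `setup` subdirectory for the authoritative structure and adjust it to your machine.
+
+```
+# Illustrative sketch only -- the template in setup/ defines the actual keys.
+external:
+  raw_data_dir: /path/to/confidential/raw_data   # only needed for the /data/ module
+executables:
+  python: python
+  r: Rscript
+  stata: statamp
+  lyx: lyx
+```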
+
+Required applications may be set up for command line usage on your computer with a different executable name from the default. If so, specify the correct executable name in `config_user.yaml`. This configuration step is explained further in the [RA manual](https://github.com/gentzkow/template/wiki/Repository-Structure#Configuration-Files).
+
+## Windows Differences
+The instructions above are for Linux and Mac users. However, with just a handful of small tweaks, this repo can also work on Windows.
+
+If you are using Windows, you may need to run certain bash commands in administrator mode due to permission errors. To do so, open your terminal by right clicking and selecting `Run as administrator`. To turn administrator mode on permanently, refer to the [RA manual](https://github.com/gentzkow/template/wiki/Repository-Usage#Administrator-Mode).
+
+The executable names are likely to differ on your computer if you are using Windows. Executable names for Windows will typically look like the following:
+
+```
+application : executable
+python : python
+lyx : LyX#.# (where #.# refers to the version number)
+r : Rscript
+stata : StataMP-64 (will need to be updated if using a version of Stata that is not Stata-MP or 64-bit)
+```
+
+To download additional `ado` files on Windows, you will likely have to adjust this bash command:
+```
+stata_executable -e download_stata_ado.do
+```
+
+`stata_executable` refers to the name of your Stata executable. For example, if your Stata executable was located in `C:\Program Files\Stata15\StataMP-64.exe`, you would want to use the following bash command:
+
+```
+StataMP-64 -e download_stata_ado.do
+```
+
+
+## List of tables and programs
+The file `docs/MappingsTablesAndFigures.pdf` provides a mapping of all the tables and figures to their corresponding program.
+
+## References
+Allcott, Hunt, Matthew Gentzkow, and Lena Song. “Data for: Digital Addiction.” Harvard Dataverse, 2023. https://doi.org/10.7910/DVN/GN636M.
+Allcott, Hunt, Matthew Gentzkow, and Lena Song. “Digital Addiction.” American Economic Review 112, no. 7 (July 2022): 2424–63. https://doi.org/10.1257/aer.20210867.
diff --git a/17/replication_package/code/README.pdf b/17/replication_package/code/README.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..ab482955c2460f6af9a76dedf0d19b1c77549587
--- /dev/null
+++ b/17/replication_package/code/README.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0accfd826dda5929fe257e41b3c25bde9551fa0ef29d46c999fa967245c811a0
+size 92738
diff --git a/17/replication_package/code/analysis/descriptive/README.md b/17/replication_package/code/analysis/descriptive/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..b289a549875253a52448791332ada6578d657d24
--- /dev/null
+++ b/17/replication_package/code/analysis/descriptive/README.md
@@ -0,0 +1,21 @@
+# README
+
+This module produces tables and charts of descriptive statistics.
+
+`/code/` contains the files below:
+
+* CommitmentDemand.do (willingness-to-pay and limit tightness plots)
+
+* COVIDResponse.do (survey stats on response to COVID)
+
+* DataDescriptive.do (sample demographics and attrition tables)
+
+* HeatmapPlots.R (predicted vs.
actual FITSBY usage) + +* QualitativeEvidence.do (descriptive plots for addiction scale, interest in bonus/limit) + +* SampleStatistics.do (statistics about completion rates for study) + +* Scalars.do (statistics about MPL and ideal usage reduction) + +* Temptation.do (plots desired usage change for various tempting activities) diff --git a/17/replication_package/code/analysis/descriptive/code/COVIDResponse.do b/17/replication_package/code/analysis/descriptive/code/COVIDResponse.do new file mode 100644 index 0000000000000000000000000000000000000000..40f190db6ebfabbcbebbc8fffef6515fc61b20a4 --- /dev/null +++ b/17/replication_package/code/analysis/descriptive/code/COVIDResponse.do @@ -0,0 +1,168 @@ +// Baseline qualitative evidence + +*************** +* Environment * +*************** + +clear all +adopath + "input/lib/ado" +adopath + "input/lib/stata/ado" + +********************* +* Utility functions * +********************* + +program define_constants + yaml read YAML using "input/config.yaml" +end + +program define_plot_settings + global HIST_SETTINGS /// + xtitle(" " "Fraction of sample") /// + bcolor(maroon) graphregion(color(white)) /// + xsize(6.5) ysize(4.5) + + global HIST_DISCRETE_SETTINGS /// + gap(50) ylabel(, valuelabel noticks angle(horizontal)) /// + $HIST_SETTINGS + + global CISPIKE_SETTINGS /// + spikecolor(maroon gray) /// + cicolor(maroon gray) /// + + global CISPIKE_VERTICAL_GRAPHOPTS /// + ylabel(#6) /// + xsize(6.5) ysize(4.5) + + global CISPIKE_STACKED_GRAPHOPTS /// + row(2) /// + graphregion(color(white)) /// + xsize(5.5) ysize(8) +end + +********************** +* Analysis functions * +********************** + +program main + define_constants + define_plot_settings + import_data + + plot_hist_covid + plot_cispike_covid +end + +program import_data + use "input/final_data_sample.dta", clear +end + +program plot_hist_covid + twoway hist S1_CovidChangesFreeTime, frac discrete horizontal /// + $HIST_DISCRETE_SETTINGS /// + ytitle("Change in free time" " ") /// + ylabel(1(1)7) + + graph export "output/hist_covid.pdf", replace + + recode S1_CovidChangeReason /// + (1 = 4 "Increased phone usage") /// + (2 = 4 "Increased phone usage") /// + (3 = 3 "No change") /// + (4 = 4 "Increased phone usage") /// + (5 = 2 "Decreased phone usage") /// + (6 = 1 "Other"), /// + gen(S1_CovidChangeReason_recode) + + twoway hist S1_CovidChangeReason_recode, /// + frac discrete horizontal /// + $HIST_DISCRETE_SETTINGS /// + ytitle("Effect of COVID-19 on phone use" " ") + + graph export "output/hist_covid_reason.pdf", replace +end + +program plot_cispike_covid + * Preserve data + preserve + + * Reshape data + keep UserID S1_PhoneUseChange* S1_LifeBetter* + rename S1_PhoneUseChange* S1_PhoneUseChange_* + rename S1_LifeBetter* S1_LifeBetter_* + rename_but, varlist(UserID) prefix(outcome) + reshape long outcome, i(UserID) j(measure) string + + split measure, p("_") + drop measure measure1 + rename (measure2 measure3) (measure time) + replace time = "2020" if time == "" + + * Recode data + encode measure, generate(measure_encode) + encode time, generate(time_encode) + + recode measure_encode /// + (1 = 1 "Phone use makes life better") /// + (2 = 2 "Ideal use change"), /// + gen(measure_recode) + + recode time_encode /// + (1 = 1 "2019") /// + (2 = 2 "Now"), /// + gen(time_recode) + + * Plot data + gen dummy = 1 + + + ttest outcome if measure_encode == 1, by(time_recode) + local diff : display %9.3fc `r(mu_2)' - `r(mu_1)' + local diff = subinstr("`diff'", " ", "", .) 
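+    * Format the standard error of the difference in means, taken from r(se) of the ttest above,
+    * removing padding so that it displays cleanly in the plot annotation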
+ local se : display %9.3fc `r(se)' + local se = subinstr("`se'", " ", "", .) + + ciquartile outcome if measure_encode == 1, /// + over1(dummy) over2(time_recode) /// + $CISPIKE_SETTINGS /// + graphopts($CISPIKE_VERTICAL_GRAPHOPTS /// + ytitle("Phone use makes life better" " ") /// + ysc(r(-1)) /// + legend(off) /// + text(-0.75 0 "Difference in means = `diff' (`se')", place(e))) + + graph save "output/cispike_covid_life.gph", replace + + ttest outcome if measure_encode == 2, by(time_recode) + local diff : display %9.3fc `r(mu_2)' - `r(mu_1)' + local diff = subinstr("`diff'", " ", "", .) + local se : display %9.3fc `r(se)' + local se = subinstr("`se'", " ", "", .) + + ciquartile outcome if measure_encode == 2, /// + over1(dummy) over2(time_recode) /// + $CISPIKE_SETTINGS /// + graphopts($CISPIKE_VERTICAL_GRAPHOPTS /// + ytitle("Ideal use change" " ") /// + ysc(r(-40)) /// + legend(off) /// + text(-37.5 0 "Difference in means = `diff' (`se')", place(e))) + + graph save "output/cispike_covid_ideal.gph", replace + + graph combine /// + "output/cispike_covid_ideal.gph" /// + "output/cispike_covid_life.gph", /// + $CISPIKE_STACKED_GRAPHOPTS + + graph export "output/cispike_covid.pdf", replace + + * Restore data + restore +end + +*********** +* Execute * +*********** + +main diff --git a/17/replication_package/code/analysis/descriptive/code/CommitmentDemand.do b/17/replication_package/code/analysis/descriptive/code/CommitmentDemand.do new file mode 100644 index 0000000000000000000000000000000000000000..8a487c013e2ac842a51d40955d4eb82445662417 --- /dev/null +++ b/17/replication_package/code/analysis/descriptive/code/CommitmentDemand.do @@ -0,0 +1,457 @@ +// Demand for commitment, moderated by demand for flexibility + +*************** +* Environment * +*************** + +clear all +adopath + "input/lib/ado" +adopath + "input/lib/stata/ado" + +********************* +* Utility functions * +********************* + +program define_constants + yaml read YAML using "input/config.yaml" +end + +program define_plot_settings + global HIST_SETTINGS /// + bcolor(maroon) graphregion(color(white)) /// + xsize(6.5) ysize(4.5) + + global HIST_DISCRETE_SETTINGS /// + gap(50) xlabel(, valuelabel noticks) /// + $HIST_SETTINGS + + global HIST_SNOOZE_SETTINGS /// + gap(50) ylabel(1(1)10, valuelabel noticks angle(horizontal) labsize(small)) /// + xtitle(" " "Fraction of sample") /// + $HIST_SETTINGS + + global HIST_CONTINUOUS_SETTINGS /// + $HIST_SETTINGS + + global CISPIKE_SETTINGS /// + spikecolor(maroon black gray) /// + cicolor(maroon black gray) + + global CISPIKE_SETTINGS4 /// + spikecolor(maroon black gray navy) /// + cicolor(maroon black gray navy) + + global CISPIKE_VERTICAL_LARGE_GRAPHOPTS /// + ylabel(#6) /// + xsize(8) ysize(4.5) /// + legend(cols(4)) + + global CISPIKE_VERTICAL_GRAPHOPTS /// + ylabel(#6) /// + xsize(6.5) ysize(4.5) /// + legend(cols(4)) +end + +********************** +* Analysis functions * +********************** + +program main + define_constants + define_plot_settings + import_data + + plot_midline_demand + plot_wtp_for_rsi + plot_wtp_for_limit + plot_wtp_for_limit_by_limit + plot_wtp_for_limit_by_bonus + plot_limit_tight + plot_limit_tight, fitsby + plot_limit_tight_by_limit + plot_limit_tight_by_limit, fitsby + plot_limit_tight_dist + plot_preferred_snooze + plot_motivation_by_reason + plot_motivation_bar +end + +program import_data + use "input/final_data_sample.dta", clear +end + +program plot_midline_demand + * Preserve data + preserve + + * Reshape data + keep UserID 
S2_PredictUseInitialEarn S2_PredictUseBonusEarn S2_MPL + rename_but, varlist(UserID) prefix(dollar) + reshape long dollar, i(UserID) j(measure) string + + * Recode data + encode measure, generate(measure_encode) + + * Plot data + gen dummy = 1 + + cispike dollar, /// + over1(dummy) over2(measure_encode) /// + $CISPIKE_SETTINGS gap2(100) /// + graphopts($CISPIKE_VERTICAL_GRAPHOPTS /// + ytitle("Dollars" " ") /// + /// Labels too long for encode + xlabel(0.5 " " /// + 1 `" "Valuation of" "Bonus" "' /// + 3 `" "Expected earnings" "at predicted usage" "with Bonus" "' /// + 5 `" "Expected earnings" "at predicted usage" "without Bonus" "' /// + 5.5 " ") /// + legend(off)) + + graph export "output/cispike_midline_demand.pdf", replace + + * Restore data + restore +end + +program plot_wtp_for_rsi + hist S2_MPL, frac discrete /// + xtitle(" " "Valuation of bonus ($)") /// + ytitle("Fraction of sample" " ") /// + $HIST_DISCRETE_SETTINGS + + graph export "output/hist_rsi_wtp.pdf", replace +end + +program plot_wtp_for_limit + hist S3_MPLLimit, frac /// + xtitle(" " "Valuation of limit functionality ($)") /// + ytitle("Fraction of sample" " ") /// + $HIST_DISCRETE_SETTINGS + + graph export "output/hist_limit_wtp.pdf", replace +end + +program plot_wtp_for_limit_by_limit + * Preserve data + preserve + + * Add average + tempfile temp + save `temp', replace + keep if inlist(S2_LimitType, 1, 2, 3, 4, 5) + replace S2_LimitType = 6 + append using `temp' + + * Recode data + recode S2_LimitType /// + (0 = .) /// + (1 = 2 "Snooze 0") /// + (2 = 3 "Snooze 2") /// + (3 = 4 "Snooze 5") /// + (4 = 5 "Snooze 20") /// + (5 = 6 "No snooze") /// + (6 = 1 "All limits"), /// + gen(S2_LimitType_recode) + + * Plot data + gen dummy = 1 + + cispike S3_MPLLimit, /// + over1(dummy) over2(S2_LimitType_recode) /// + $CISPIKE_SETTINGS /// + graphopts($CISPIKE_VERTICAL_LARGE_GRAPHOPTS /// + ytitle("Willingness-to-pay for limit (dollars)" " ") /// + legend(off)) + + graph export "output/cispike_limit_wtp.pdf", replace + + * Restore data + restore +end + +program plot_wtp_for_limit_by_bonus + * Preserve data + preserve + + * Recode data + recode S3_Bonus /// + (0 = 0 "Control") /// + (1 = 1 "Bonus"), /// + gen(S3_Bonus_recode) + + * Plot data + gen dummy = 1 + + cispike S3_MPLLimit, /// + over1(dummy) over2(S3_Bonus_recode) /// + $CISPIKE_SETTINGS /// + graphopts($CISPIKE_VERTICAL_LARGE_GRAPHOPTS /// + ytitle("Willingness-to-pay for Limit (dollars)" " ") /// + legend(off)) + + graph export "output/cispike_limit_wtp_by_bonus.pdf", replace + + * Restore data + restore +end + +program plot_limit_tight + syntax, [fitsby] + + if ("`fitsby'" == "fitsby") { + local fitsby "FITSBY" + local suffix "_fitsby" + } + + else { + local fitsby "" + local suffix "" + } + + * Preserve data + preserve + + * Reshape data + keep UserID S2_LimitType *LimitTight`fitsby' + rename_but, varlist(UserID S2_LimitType) prefix(tight) + reshape long tight, i(UserID S2_LimitType) j(measure) string + + * Recode data + sort measure + encode measure, generate(measure_encode) + + recode measure_encode /// + (1 = 1 "Period 2") /// + (2 = 2 "Period 3") /// + (5 = 3 "Period 4") /// + (7 = 4 "Period 5") /// + (4 = 5 "Periods 3 & 4") /// + (3 = 6 "Periods 2 to 4") /// + (6 = 7 "Periods 2 to 5"), /// + gen(measure_recode) + + * Plot data + gen dummy = 1 + + cispike tight if measure_recode <= 4, /// + over1(dummy) over2(measure_recode) /// + $CISPIKE_SETTINGS4 /// + graphopts($CISPIKE_VERTICAL_GRAPHOPTS /// + ytitle("Limit tightness (minutes/day)" " ") /// + legend(off)) + + 
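+    * Export the per-period limit tightness figure; the suffix local is _fitsby when the fitsby option is used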
graph export "output/cispike_limit_tight`suffix'.pdf", replace + + * Restore data + restore +end + +program plot_limit_tight_by_limit + syntax, [fitsby] + + if ("`fitsby'" == "fitsby") { + local fitsby "FITSBY" + local suffix "_fitsby" + } + + else { + local fitsby "" + local suffix "" + } + + * Preserve data + preserve + + * Reshape data + keep UserID S2_LimitType *LimitTight`fitsby' + rename_but, varlist(UserID S2_LimitType) prefix(tight) + reshape long tight, i(UserID S2_LimitType) j(measure) string + + * Recode data + sort measure + encode measure, generate(measure_encode) + + recode measure_encode /// + (1 = 1 "Period 2") /// + (2 = 2 "Period 3") /// + (5 = 3 "Period 4") /// + (7 = 4 "Period 5") /// + (4 = 5 "Periods 3 & 4") /// + (3 = 6 "Periods 2 to 4") /// + (6 = 7 "Periods 2 to 5"), /// + gen(measure_recode) + + recode S2_LimitType /// + (0 = .) /// + (1 = 1 "Snooze 0") /// + (2 = 2 "Snooze 2") /// + (3 = 3 "Snooze 5") /// + (4 = 4 "Snooze 20") /// + (5 = 5 "No snooze"), /// + gen(S2_LimitType_recode) + + * Plot data (all periods together) 2 - 5 + gen dummy = 1 + + cispike tight if measure_recode == 7, /// + over1(dummy) over2(S2_LimitType_recode) /// + $CISPIKE_SETTINGS /// + graphopts($CISPIKE_VERTICAL_LARGE_GRAPHOPTS /// + ytitle("Limit tightness (minutes/day)" " ") /// + xlabel(, labsize(medlarge)) xtitle(, size(medlarge)) /// + ylabel(, labsize(medlarge)) ytitle(, size(medlarge)) /// + legend(off)) + + graph export "output/cispike_limit_tight_combined_by_limit`suffix'.pdf", replace + + * Plot data (by period) + cispike tight if measure_recode <= 4, /// + over1(measure_recode) over2(S2_LimitType_recode) /// + $CISPIKE_SETTINGS4 /// + graphopts($CISPIKE_VERTICAL_LARGE_GRAPHOPTS /// + ytitle("Limit tightness (minutes/day)" " ") /// + xlabel(, labsize(medlarge)) xtitle(, size(medlarge)) /// + ylabel(, labsize(medlarge)) ytitle(, size(medlarge)) /// + legend(size(medlarge))) + + graph export "output/cispike_limit_tight_by_limit`suffix'.pdf", replace + + * Restore data + restore +end + +program plot_limit_tight_dist + * Preserve data + preserve + + * Plot data (by period) + hist PD_P2_LimitTight, frac /// + xtitle(" " "Period 2 limit tightness (minutes/day)") /// + ytitle("Fraction of sample" " ") /// + $HIST_CONTINUOUS_SETTINGS + + graph export "output/hist_limit_tight_p2.pdf", replace + + * Plot data (all periods together) + hist PD_P5432_LimitTight, frac /// + xtitle(" " "Periods 2 to 5 limit tightness (minutes/day)") /// + ytitle("Fraction of sample" " ") /// + $HIST_CONTINUOUS_SETTINGS + + graph export "output/hist_limit_tight.pdf", replace + + * Reshape data + keep UserID PD_P2_LimitTight_* + drop *Other + reshape long PD_P2, i(UserID) j(measure) string + + * Recode data + sort measure + encode measure, generate(measure_encode) + + recode measure_encode /// + (2 = 1 "Facebook") /// + (3 = 2 "Instagram") /// + (5 = 3 "Twitter") /// + (4 = 4 "Snapchat") /// + (1 = 5 "Browser") /// + (6 = 6 "YouTube"), /// + gen(measure_recode) + + * Plot data (by app) + local app_1 "Facebook" + local app_2 "Instagram" + local app_3 "Twitter" + local app_4 "Snapchat" + local app_5 "Browser" + local app_6 "YouTube" + + foreach num of numlist 1/6 { + hist PD_P2 if measure_encode == `num', frac /// + xtitle(" " "Period 2 limit tightness for `app_`num'' (minutes/day)") /// + ytitle("Fraction of sample" " ") /// + $HIST_CONTINUOUS_SETTINGS /// + xlabel(, labsize(large)) xtitle(, size(large)) /// + ylabel(, labsize(large)) ytitle(, size(large)) /// + legend(size(large)) + + graph export 
"output/hist_limit_tight_`num'.pdf", replace + } + + * Restore data + restore +end + +program plot_preferred_snooze + recode S4_PreferredSnooze /// + (1 = 1 "No delay") /// + (2 = 2 "1 minute") /// + (3 = 3 "2 minutes") /// + (4 = 4 "3-4 minutes") /// + (5 = 5 "5 minutes") /// + (6 = 6 "10 minutes") /// + (7 = 7 "20 minutes") /// + (8 = 8 "30 minutes+") /// + (9 = 9 "Prefer no snooze") /// + (10 = 10 "Does not matter"), /// + gen(S4_PreferredSnooze_short_names) + + twoway hist S4_PreferredSnooze_short_names, /// + frac discrete horizontal /// + $HIST_SNOOZE_SETTINGS /// + ytitle("Preferred Snooze Length (minutes)" " ") + + + graph export "output/hist_preferred_snooze.pdf", replace +end + +program plot_motivation_by_reason + preserve + * Plot data + gen dummy = 1 + + cispike S2_Motivation, /// + over1(dummy) over2(S2_MPLReasoning) /// + $CISPIKE_SETTINGS4 gap2(100) /// + graphopts($CISPIKE_VERTICAL_GRAPHOPTS /// + ytitle("Behavior change premium ($)" " ") /// + /// Labels too long for encode + xlabel(0.5 " " /// + 1 `" "Only wanted" "to maximize" "earnings" "' /// + 3 `" "Wanted incentive" "to use phone" "less" "' /// + 5 `" "Don't want pressure" "to use phone" "less" "' /// + 7 `" "Other" "' /// + 7.5 " ") /// + legend(off)) + + graph export "output/cispike_motivation_reason.pdf", replace + + * Restore data + restore +end + +program plot_motivation_bar + preserve + * Plot data + twoway hist S2_MPLReasoning, frac discrete /// + $HIST_DISCRETE_SETTINGS /// + xlabel(1 `" "Only wanted" "to maximize" "earnings" "' /// + 2 `" "Wanted incentive" "to use phone" "less" "' /// + 3 `" "Don't want pressure" "to use phone" "less" "' /// + 4 `" "Other" "') /// + ytitle("Fraction of sample" " ") /// + xtitle("") + + + graph export "output/hist_motivation_mpl.pdf", replace + + * Restore data + restore +end + +*********** +* Execute * +*********** + +main diff --git a/17/replication_package/code/analysis/descriptive/code/DataDescriptives.do b/17/replication_package/code/analysis/descriptive/code/DataDescriptives.do new file mode 100644 index 0000000000000000000000000000000000000000..adbb35c81485ca49b585cfebc9df7b5d4cf0045b --- /dev/null +++ b/17/replication_package/code/analysis/descriptive/code/DataDescriptives.do @@ -0,0 +1,668 @@ +// Description of data + +*************** +* Environment * +*************** + +clear all +adopath + "input/lib/ado" +adopath + "input/lib/stata/ado" + +********************* +* Utility functions * +********************* + +program define_constants + yaml read YAML using "input/config.yaml" +end + +program define_settings + global DESCRIPTIVE_TAB /// + collabels(none) nodepvars noobs replace + + global DESCRIPTIVE_TAB_DETAILED /// + nomtitle nonumbers noobs compress label replace /// + cells((mean(fmt(%8.1fc)) /// + sd(fmt(%8.1fc)) /// + min(fmt(%8.0fc)) /// + max(fmt(%8.0fc)))) /// + collabels("\shortstack{Mean}" /// + "\shortstack{Standard\\deviation}" /// + "\shortstack{Minimum\\value}" /// + "\shortstack{Maximum\\value}") /// + + global BALANCE_TAB /// + order(1 0) grplabels(1 Treatment @ 0 Control) /// + pftest pttest ftest fmissok vce(robust) stdev /// + rowvarlabel onenrow tblnonote format(%8.2fc) replace + + global HIST_CONTINUOUS_SETTINGS /// + bcolor(maroon) graphregion(color(white)) /// + xsize(6.5) ysize(4) + + global BAR_SETTINGS /// + region(lcolor(white))) graphregion(color(white)) /// + xsize(6.5) ysize(4) +end + +********************** +* Analysis functions * +********************** + +program main + define_constants + define_settings + import_data + clean_data 
+ + sample_demographics_balance_all + sample_demographics + sample_demographics_balance + * limit_attrition + * bonus_attrition + balance + historical_use + historical_use, fitsby + summary_welfare + share_use_by_app + addiction_plot +end + +program import_data + use "input/final_data_sample.dta", clear + + foreach time in S3 S4 { + replace `time'_Finished = 0 if `time'_Finished == . + } +end + +program clean_data + * Demographics + recode S1_Income /// + (1 = 5) /// + (2 = 15) /// + (3 = 25) /// + (4 = 35) /// + (5 = 45) /// + (6 = 55) /// + (7 = 67) /// + (8 = 87.5) /// + (9 = 112.5) /// + (10 = 137.5) /// + (11 = 150) /// + (12 = .), /// + gen(income) + + gen college = (S1_Education >= 5) + gen male = (S0_Gender == 1) + gen white = (S1_Race == 5) + + * Limit treatment + gen limit_T = 1 if S2_LimitType > 0 & S2_LimitType != . + replace limit_T = 0 if S2_LimitType == 0 + + * Labels + label var college "College" + label var male "Male" + label var white "White" + + label var income "Income (\\$000s)" + label var S0_Age "Age" + label var PD_P1_UsageFITSBY "Period 1 FITSBY use (minutes/day)" +end + +program sample_demographics + local varset income college male white S0_Age PD_P1_Usage PD_P1_UsageFITSBY + + * Sample demographics + estpost tabstat `varset', statistics(mean) columns(statistics) + est store sample_col + + * Preserve data + preserve + + * US demographics + replace income = 43.01 + replace college = 0.3009 + replace male = 0.4867 + replace white = 0.73581 + replace S0_Age = 47.6 + replace PD_P1_Usage = . + replace PD_P1_UsageFITSBY = . + + estpost tabstat `varset', statistics(mean) columns(statistics) + est store us_col + + * Restore data + restore + + * Export table + esttab sample_col us_col using "output/sample_demographics.tex", /// + mtitle("\shortstack{Analysis\\sample}" /// + "\shortstack{U.S.\\adults}") /// + coeflabels(income "Income (\\$000s)" /// + college "College" /// + male "Male" /// + white "White" /// + S0_Age "Age" /// + PD_P1_Usage "Period 1 phone use (minutes/day)" /// + PD_P1_UsageFITSBY "Period 1 FITSBY use (minutes/day)") /// + $DESCRIPTIVE_TAB /// + cells(mean(fmt(%9.1fc %9.2fc %9.2fc %9.2fc %9.1fc %9.1fc %9.1fc))) + + est clear +end + +program sample_demographics_balance + local varset balance_income balance_college balance_male balance_white balance_age /// + PD_P1_Usage PD_P1_UsageFITSBY + + * Sample demographics + estpost tabstat `varset', statistics(mean) columns(statistics) + est store sample_col + + * Preserve data + preserve + + local income 43.01 + local college 0.3009 + local male 0.4867 + local white 0.73581 + local age 47.6 + + ebalance balance_income balance_college balance_male balance_white balance_age, /// + manualtargets(`income' `college' `male' `white' `age') generate(weight) + + * Winsorize weights + gen weight2 = weight + replace weight2 = 2 if weight2 > 2 + replace weight2 = 1/2 if weight2 < 1/2 + + estpost tabstat `varset' [weight=weight2], statistics(mean) columns(statistics) + est store sample_col_w2 + + * US demographics + replace balance_income = `income' + replace balance_college = `college' + replace balance_male = `male' + replace balance_white = `white' + replace balance_age = `age' + replace PD_P1_Usage = . + replace PD_P1_UsageFITSBY = . 
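+    * Tabulate the U.S. adult benchmark means (period 1 usage columns left missing)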
+ + estpost tabstat `varset', statistics(mean) columns(statistics) + est store us_col + + * Restore data + restore + + * Export table + esttab sample_col sample_col_w2 us_col /// + using "output/sample_demographics_balance.tex", /// + mtitle("\shortstack{Analysis\\sample}" /// + "\shortstack{Balanced\\sample}" /// + "\shortstack{U.S.\\adults}" /// + ) /// + coeflabels(balance_income "Income (\\$000s)" /// + balance_college "College" /// + balance_male "Male" /// + balance_white "White" /// + balance_age "Age" /// + PD_P1_Usage "Period 1 phone use (minutes/day)" /// + PD_P1_UsageFITSBY "Period 1 FITSBY use (minutes/day)") /// + $DESCRIPTIVE_TAB /// + cells(mean(fmt(%9.1fc %9.2fc %9.2fc %9.2fc %9.1fc %9.1fc %9.1fc))) + + est clear +end + +program sample_demographics_balance_all + local varset balance_income balance_college balance_male balance_white balance_age /// + PD_P1_Usage PD_P1_UsageFITSBY + + * Sample demographics + estpost tabstat `varset', statistics(mean) columns(statistics) + est store sample_col + + * Preserve data + preserve + + local income 43.01 + local college 0.3009 + local male 0.4867 + local white 0.73581 + local age 47.6 + + ebalance balance_income balance_college balance_male balance_white balance_age, /// + manualtargets(`income' `college' `male' `white' `age') generate(weight) + + * Winsorize weights + gen weight2 = weight + replace weight2 = 2 if weight2 > 2 + replace weight2 = 1/2 if weight2 < 1/2 + + gen weight3 = weight + replace weight3 = 3 if weight3 > 3 + replace weight3 = 1/3 if weight3 < 1/3 + + gen weight4 = weight + replace weight4 = 4 if weight4 > 4 + replace weight4 = 1/4 if weight4 < 1/4 + + gen weight5 = weight + replace weight5 = 5 if weight5 > 5 + replace weight5 = 1/5 if weight5 < 1/5 + + estpost tabstat `varset' [weight=weight2], statistics(mean) columns(statistics) + est store sample_col_w2 + + estpost tabstat `varset' [weight=weight3], statistics(mean) columns(statistics) + est store sample_col_w3 + + estpost tabstat `varset' [weight=weight4], statistics(mean) columns(statistics) + est store sample_col_w4 + + estpost tabstat `varset' [weight=weight5], statistics(mean) columns(statistics) + est store sample_col_w5 + + * US demographics + replace balance_income = `income' + replace balance_college = `college' + replace balance_male = `male' + replace balance_white = `white' + replace balance_age = `age' + replace PD_P1_Usage = . + replace PD_P1_UsageFITSBY = . 
+ + estpost tabstat `varset', statistics(mean) columns(statistics) + est store us_col + + * Restore data + restore + + * Export table + esttab us_col sample_col sample_col_w2 sample_col_w3 sample_col_w4 sample_col_w5 /// + using "output/sample_demographics_balance_all.tex", /// + mtitle("\shortstack{U.S.\\adults}" /// + "\shortstack{Analysis\\sample}" /// + "\shortstack{(w=2)}" /// + "\shortstack{(w=3)}" /// + "\shortstack{(w=4)}" /// + "\shortstack{(w=5)}" /// + ) /// + coeflabels(balance_income "Income (\\$000s)" /// + balance_college "College" /// + balance_male "Male" /// + balance_white "White" /// + balance_age "Age" /// + PD_P1_Usage "Period 1 use (min/day)" /// + PD_P1_UsageFITSBY "Period 1 FITSBY use (min/day)") /// + $DESCRIPTIVE_TAB /// + cells(mean(fmt(%9.1fc %9.2fc %9.2fc %9.2fc %9.1fc %9.1fc %9.1fc))) + + est clear +end + + +program limit_attrition + local varset /// + S3_Finished /// + S4_Finished /// + I_P2_Usage /// + I_P3_Usage /// + I_P4_Usage /// + I_P5_Usage + + * Preserve data + preserve + + * Use old sample definition + use "input/final_data.dta", clear + keep if S2_RevealConfirm == 1 & S3_Bonus <= 1 + foreach time in S3 S4 { + replace `time'_Finished = 0 if `time'_Finished == . + } + + * Create usage indicators + foreach time in P2 P3 P4 P5 { + gen I_`time'_Usage = 0 + replace I_`time'_Usage = 1 if PD_`time'_Usage != . + } + + * Attrition by limit group + forvalues i = 0/5 { + local if if S2_LimitType == `i' + estpost tabstat `varset' `if', statistics(mean) columns(statistics) + est store attrition_b`i' + } + + * Attrition for limit groups + local if if S2_LimitType != 0 + estpost tabstat `varset' `if', statistics(mean) columns(statistics) + est store attrition_b + + * F-test for limit groups + foreach var of varlist `varset' { + reg `var' i.S2_LimitType + local fvalue = Ftail(e(df_m), e(df_r), e(F)) + replace `var' = `fvalue' + } + estpost tabstat `varset', statistics(mean) columns(statistics) + est store fval_b + + * Export limit attrition table + esttab attrition_b0 attrition_b /// + attrition_b1 attrition_b2 /// + attrition_b3 attrition_b4 /// + attrition_b5 fval_b /// + using "output/attrition_limit.tex", /// + mtitle("\shortstack{Control}" /// + "\shortstack{All\\limits}" /// + "\shortstack{Snooze\\0}" /// + "\shortstack{Snooze\\2}" /// + "\shortstack{Snooze\\5}" /// + "\shortstack{Snooze\\20}" /// + "\shortstack{No\\snooze}" /// + "\shortstack{F-test\\p-value}") /// + coeflabels(S3_Finished "Completed survey 3" /// + S4_Finished "Completed survey 4" /// + I_P2_Usage "Have period 2 usage" /// + I_P3_Usage "Have period 3 usage" /// + I_P4_Usage "Have period 4 usage" /// + I_P5_Usage "Have period 5 usage") /// + $DESCRIPTIVE_TAB /// + cells(mean(fmt(%9.2fc))) + + est clear + + * Restore data + restore +end + +program bonus_attrition + local varset /// + S3_Finished /// + S4_Finished /// + I_P2_Usage /// + I_P3_Usage /// + I_P4_Usage /// + I_P5_Usage + + * Preserve data + preserve + + * Use old sample definition + use "input/final_data.dta", clear + keep if S2_RevealConfirm == 1 & S3_Bonus <= 1 + foreach time in S3 S4 { + replace `time'_Finished = 0 if `time'_Finished == . + } + + keep if S3_Bonus != 2 + + * Create usage indicators + foreach time in P2 P3 P4 P5 { + gen I_`time'_Usage = 0 + replace I_`time'_Usage = 1 if PD_`time'_Usage != . 
+ } + + * Attrition by bonus group + forvalues i = 0 / 1 { + local if if S3_Bonus == `i' + estpost tabstat `varset' `if', statistics(mean) columns(statistics) + est store attrition_bonus`i' + } + + * T-test for bonus groups + foreach var of varlist `varset' { + capture prtest `var', by(S3_Bonus) + + if _rc == 0 { + local diff = -1 * r(P_diff) + local pval = r(p) + gen `var'_d = `diff' + gen `var'_p = `pval' + } + else { + gen `var'_d = 0 + gen `var'_p = . + } + } + + * Append bonus differences + foreach var of varlist `varset' { + replace `var' = `var'_d + } + estpost tabstat `varset', statistics(mean) columns(statistics) + est store diff_bonus + + * Append bonus p-values + foreach var of varlist `varset' { + replace `var' = `var'_p + } + estpost tabstat `varset', statistics(mean) columns(statistics) + est store pval_bonus + + display("here") + + * Export Bonus attrition table + esttab attrition_bonus0 attrition_bonus1 pval_bonus using "output/attrition_bonus.tex", /// + mtitle("\shortstack{Control}" /// + "\shortstack{Treatment}" /// + "\shortstack{t-test\\p-value}") /// + coeflabels(S3_Finished "Completed survey 3" /// + S4_Finished "Completed survey 4" /// + I_P2_Usage "Have period 2 usage" /// + I_P3_Usage "Have period 3 usage" /// + I_P4_Usage "Have period 4 usage" /// + I_P5_Usage "Have period 5 usage") /// + $DESCRIPTIVE_TAB /// + cells(mean(fmt(%9.2fc))) + + est clear + + * Restore data + restore +end + +program balance + local varset income college male white S0_Age PD_P1_UsageFITSBY + + iebaltab_edit `varset', /// + grpvar(limit_T) /// + savetex("output/balance_limit.tex") /// + $BALANCE_TAB + + iebaltab_edit `varset', /// + grpvar(S3_Bonus) /// + savetex("output/balance_bonus.tex") /// + $BALANCE_TAB + + * panelcombine, /// + * use(output/balance_limit.tex /// + * output/balance_bonus.tex) /// + * paneltitles("Limit Treatment" /// + * "Bonus Treatment") /// + * columncount(4) /// + * save("output/balance.tex") cleanup +end + +program historical_use + syntax, [fitsby] + + if ("`fitsby'" == "fitsby") { + local fitsby "FITSBY" + local suffix "_fitsby" + local word "FITSBY" + } + + else { + local fitsby "" + local suffix "" + local word "phone" + } + + local var PD_P1_Usage`fitsby' + label var PD_P1_Usage`fitsby' "Period 1 `word' use (minutes/day)" + + local label : var label `var' + sum `var', d + + twoway histogram `var', frac /// + ytitle("Fraction of sample" " ") /// + xtitle(" " "`label'") /// + $HIST_CONTINUOUS_SETTINGS + + graph export "output/hist_baseline_usage`suffix'.pdf", replace +end + +program summary_welfare + local varset /// + S1_PhoneUseChange /// + S1_AddictionIndex /// + S1_SMSIndex /// + S1_LifeBetter /// + S1_SWBIndex + + + estpost tabstat `varset', /// + statistics(mean, sd, max, min) columns(statistics) + + est store baseline + + esttab baseline using "output/baseline_welfare.tex", /// + $DESCRIPTIVE_TAB_DETAILED /// + coeflabels(S1_PhoneUseChange "Ideal use change" /// + S1_AddictionIndex "Addiction scale x (-1)" /// + S1_SMSIndex "SMS addiction scale x (-1)" /// + S1_LifeBetter "Phone makes life better" /// + S1_SWBIndex "Subjective well-being") +end + +program share_use_by_app + * Preserve data + preserve + + * Reshape data + keep UserID PD_P1_Usage_* PD_P1_Installed_* + drop *Other *_H* + reshape long PD_P1_Usage_ PD_P1_Installed_ , i(UserID) j(app) s + replace PD_P1_Usage_ = 0 if PD_P1_Usage_ == . 
+ + * Collapse data + collapse (mean) PD_P1_Usage_ PD_P1_Installed_, by(app) + gsort -PD_P1_Usage_ + gen order = _n + + cap drop appname1 appname2 + gen appname1 = _n - 0.2 + gen appname2 = _n + 0.2 + + local N = _N + forvalues i = 1/`N' { + local t`i' = app[`i'] + } + + * Plot data + twoway bar PD_P1_Installed_ appname1, /// + fintensity(inten50) barw(0.35) /// + yaxis(1) yscale(axis(1) range(0)) ylabel(0(0.2)1, axis(1)) /// + xlabel(1 "`t1'" 2 "`t2'" 3 "`t3'" 4 "`t4'" 5 "`t5'" /// + 6 "`t6'" 7 "`t7'" 8 "`t8'" 9 "`t9'" 10 "`t10'" /// + 11 "`t11'" 12 "`t12'" 13 "`t13'" 14 "`t14'", /// + valuelabel angle(45)) || /// + bar PD_P1_Usage_ appname2, /// + fintensity(inten100) barw(0.35) /// + yaxis(2) yscale(axis(2) range(0)) ylabel(#5, axis(2)) /// + xtitle("") ytitle("Share of users", axis(1)) ytitle("Minutes/day", axis(2)) /// + legend(label(1 "Users at baseline") /// + label(2 "Period 1 use") /// + $BAR_SETTINGS + + graph export "output/bar_share_use_by_app.pdf", replace + + * Restore data + restore +end + +program addiction_plot + * Preserve data + preserve + + * Reshape data + keep UserID *_Addiction_* + keep if S3_Addiction_1 != . + + foreach i in 3 { + forvalues j = 1/16 { + gen S`i'_Addiction_Binary_`j' = S`i'_Addiction_`j' > 0.5 + + } + } + + keep UserID S3_Addiction_Binary_* + + reshape long S3_Addiction_Binary_ , i(UserID) j(question) + + rename S3_Addiction_Binary_ S3_Addiction + + * Collapse data + collapse (mean) S3_Addiction , by(question) + + gen order = _n + + cap drop qname + gen qname = _N - _n + 1 + + gen category = qname < 9 + + + * Plot data + twoway bar S3_Addiction qname, /// + fintensity(inten100) barw(0.6) bcolor(maroon) /// + yaxis(1) yscale(axis(1) range(0)) xlabel(0(0.2)1, axis(1)) /// + ylabel(1 "Procrastinate by using phone" 2 "Prefer phone to human interaction" /// + 3 "Lose sleep from use" 4 "Harms school/work performance" /// + 5 "Annoyed at interruption in use" 6 "Difficult to put down phone" /// + 7 "Feel anxious without phone" 8 "Others are concerned about use" /// + 9 "Try and fail to reduce use" 10 "Use to relax to go to sleep" /// + 11 "Use to distract from anxiety/etc." 12 "Use to distract from personal issues" /// + 13 "Tell yourself just a few more minutes" 14 "Use longer than intended" /// + 15 "Wake up, check phone immediately" 16 "Fear missing out online", /// + valuelabel angle(0)) horizontal /// + ytitle(" relapse, withdrawal, conflict salience, tolerance, mood", size(small)) /// + xtitle(`"Share of people who "often" or "always""', axis(1)) /// + legend(label(1 "Survey 3") /// + $BAR_SETTINGS + + graph export "output/addiction.pdf", replace + + + * Plot data + twoway bar S3_Addiction qname, /// + fintensity(inten100) barw(0.75) bcolor(maroon) /// + yaxis(1) yscale(axis(1) range(0)) xlabel(0(0.2)0.8, axis(1)) /// + xlabel(, labsize(large)) /// + ylabel(1 "Procrastinate by using phone" 2 "Prefer phone to human interaction" /// + 3 "Lose sleep from use" 4 "Harms school/work performance" /// + 5 "Annoyed at interruption in use" 6 "Difficult to put down phone" /// + 7 "Feel anxious without phone" 8 "Others are concerned about use" /// + 9 "Try and fail to reduce use" 10 "Use to relax to go to sleep" /// + 11 "Use to distract from anxiety/etc." 
12 "Use to distract from personal issues" /// + 13 "Tell yourself just a few more minutes" 14 "Use longer than intended" /// + 15 "Wake up, check phone immediately" 16 "Fear missing out online", /// + valuelabel angle(0) labsize(large)) horizontal /// + ytitle(, size(zero)) /// + xtitle(`"Share of people who "often" or "always""', axis(1) justification(right) size(large)) /// + legend(label(1 "Survey 3") /// + region(lcolor(white))) graphregion(color(white)) /// + xsize(6.5) ysize(4.5 ) + + graph export "output/addiction_large.pdf", replace +end + +*********** +* Execute * +*********** + +main diff --git a/17/replication_package/code/analysis/descriptive/code/HeatmapPlots.R b/17/replication_package/code/analysis/descriptive/code/HeatmapPlots.R new file mode 100644 index 0000000000000000000000000000000000000000..c9074d2f38214b8ea78dcde85a2e7c3bf6a45839 --- /dev/null +++ b/17/replication_package/code/analysis/descriptive/code/HeatmapPlots.R @@ -0,0 +1,144 @@ +library(ggplot2) +library(tidyverse) +library(haven) + +maroon <- '#94343c' +grey <- '#848484' + +low_grey <- "grey90" + +plot_wtp_prediction <- function(df){ + + # Tally the bins. Create bins centered at 5, 15, 25, etc. + counted <- df %>% + mutate(S2_PredictUseBonusEarnBin = S2_PredictUseBonusEarn - (S2_PredictUseBonusEarn %% 10) + 5) %>% + select(UserID, S2_PredictUseBonusEarnBin, S2_MPL) %>% + group_by(S2_MPL, S2_PredictUseBonusEarnBin) %>% + count(name="Count") + + # Create an empty dataframe of all of the index combinations + mpls <- unique(counted$S2_MPL) + pred <- unique(counted$S2_PredictUseBonusEarnBin) + + S2_MPL <- rep(mpls, length(pred)) + S2_PredictUseBonusEarnBin <- rep(pred, each=length(mpls)) + + empty <- data.frame(S2_MPL, S2_PredictUseBonusEarnBin) + + # replaces the non-missing + full <- empty %>% + left_join(counted, by= c('S2_MPL', 'S2_PredictUseBonusEarnBin')) %>% + mutate(Count=ifelse(is.na(Count), 0, Count)) + + #plots + a <- full %>% + ggplot(aes(S2_MPL, S2_PredictUseBonusEarnBin, fill= Count)) + + geom_tile() + + scale_fill_gradient(low = low_grey, high = maroon) + + theme_classic() + + labs(x= "Valuation of bonus ($)", y = "Predicted earnings from bonus ($)") + + geom_abline(intercept = 0, slope=1) + + ggsave('output/heatmap_wtp_prediction.pdf', plot=a, width=6.5, height=4.5, units="in") +} + +plot_predicted_actual <- function(df, period){ + bin_size <- 20 + + # filter to just control + data <- df %>% + filter(B == 0 & L == 0) + + #rename + data %<>% mutate(Predicted = !!sym(paste0('S', period, '_PredictUseNext_1'))) %>% + mutate(Actual = !!sym(paste0('PD_P', period, '_UsageFITSBY'))) %>% + filter(!is.na(Predicted) & !is.na(Actual)) + + counts <- data %>% + mutate(PredictedBin = Predicted - (Predicted %% bin_size) + (bin_size/2)) %>% + mutate(ActualBin = Actual - (Actual %% bin_size) + (bin_size/2)) %>% + select(PredictedBin, ActualBin) %>% + group_by(PredictedBin, ActualBin) %>% + count(name="Count") + + #plots + a <- counts %>% + ggplot(aes(PredictedBin, ActualBin, fill= Count)) + + geom_tile() + + scale_fill_gradient(low = low_grey, high = maroon) + + theme_classic() + + labs(x= "Predicted FITSBY use (minutes/day)", y = "Actual FITSBY use (minutes/day)") + + geom_abline(intercept = 0, slope=1) + + xlim(0, 500) + ylim(0, 500) + + ggsave(sprintf('output/heatmap_usage_P%s.pdf', period), plot=a, width=6.5, height=4.5, units="in") + +} + +plot_predicted_actual_all <- function(df){ + bin_size <- 20 + + # filter to just control + data <- df %>% + filter(B == 0 & L == 0) + + #rename + p2 <- data %>% 
mutate(Predicted = S2_PredictUseNext_1) %>% + mutate(Actual = PD_P2_UsageFITSBY) %>% + filter(!is.na(Predicted) & !is.na(Actual)) %>% + select(Predicted, Actual) + + p3 <- data %>% mutate(Predicted = S3_PredictUseNext_1) %>% + mutate(Actual = PD_P3_UsageFITSBY) %>% + filter(!is.na(Predicted) & !is.na(Actual)) %>% + select(Predicted, Actual) + + p4 <- data %>% mutate(Predicted = S4_PredictUseNext_1) %>% + mutate(Actual = PD_P4_UsageFITSBY) %>% + filter(!is.na(Predicted) & !is.na(Actual)) %>% + select(Predicted, Actual) + + all_periods <- rbind(p2, p3, p4) + + counts <- all_periods %>% + mutate(PredictedBin = Predicted - (Predicted %% bin_size) + (bin_size/2)) %>% + mutate(ActualBin = Actual - (Actual %% bin_size) + (bin_size/2)) %>% + select(PredictedBin, ActualBin) %>% + group_by(PredictedBin, ActualBin) %>% + count(name="Count") + + #plots + a <- counts %>% + ggplot(aes(PredictedBin, ActualBin, fill= Count)) + + geom_tile() + + scale_fill_gradient(low = low_grey, high = maroon) + + theme_classic() + + labs(x= "Predicted FITSBY use (minutes/day)", y = "Actual FITSBY use (minutes/day)") + + geom_abline(intercept = 0, slope=1) + + xlim(0, 500) + ylim(0, 500) + + ggsave('output/heatmap_usage.pdf', plot=a, width=6.5, height=4.5, units="in") + +} + +main <- function(){ + df <- read_dta('input/final_data_sample.dta') + + # clean data + df %<>% + mutate(L = ifelse(S2_LimitType != 0, 1, 0)) %>% + mutate(B = ifelse(S3_Bonus == 1, 1, 0)) %>% + mutate(S = as.character(Stratifier)) + + plot_wtp_prediction(df) + + plot_predicted_actual(df, 2) + plot_predicted_actual(df, 3) + plot_predicted_actual(df, 4) + + plot_predicted_actual_all(df) + +} + + +main() diff --git a/17/replication_package/code/analysis/descriptive/code/QualitativeEvidence.do b/17/replication_package/code/analysis/descriptive/code/QualitativeEvidence.do new file mode 100644 index 0000000000000000000000000000000000000000..812e3570aca90ce5a7b61df49302dc8c11a84182 --- /dev/null +++ b/17/replication_package/code/analysis/descriptive/code/QualitativeEvidence.do @@ -0,0 +1,152 @@ +// Baseline qualitative evidence + +*************** +* Environment * +*************** + +clear all +adopath + "input/lib/ado" +adopath + "input/lib/stata/ado" + +********************* +* Utility functions * +********************* + +program define_constants + yaml read YAML using "input/config.yaml" +end + +program define_plot_settings + global HIST_SETTINGS /// + xlabel(, labsize(large)) /// + ylabel(, labsize(large)) /// + ytitle("Fraction of sample" " ", size(large)) /// + bcolor(maroon) graphregion(color(white)) /// + xsize(6.5) ysize(4.5) + + global HIST_DISCRETE_SETTINGS /// + gap(50) xlabel(, valuelabel noticks) /// + $HIST_SETTINGS + + global HIST_CONTINUOUS_SETTINGS /// + $HIST_SETTINGS + + global CISPIKE_VERTICAL_GRAPHOPTS /// + ylabel(#6) /// + xsize(6.5) ysize(4.5) + + global CISPIKE_SETTINGS /// + spikecolor(maroon black gray) /// + cicolor(maroon black gray) +end + +********************** +* Analysis functions * +********************** + +program main + define_constants + define_plot_settings + import_data + + plot_self_control + plot_self_control_by_age +end + +program import_data + use "input/final_data_sample.dta", clear +end + +program plot_self_control + twoway hist S1_InterestInLimits, frac discrete /// + $HIST_DISCRETE_SETTINGS /// + xtitle(" " "Interest in limits", size(large)) + + graph export "output/hist_limits_interest.pdf", replace + + twoway hist S1_PhoneUseChange, frac /// + $HIST_CONTINUOUS_SETTINGS /// + width(5) start(-102.5) /// + 
xtitle(" " "Ideal use change (percent)", size(large)) + + graph export "output/hist_phone_use.pdf", replace + + twoway hist S1_LifeBetter, frac discrete /// + $HIST_CONTINUOUS_SETTINGS /// + xtitle(" " "Phone use makes life worse (left) or better (right)", size(large)) /// + xtick(-5(2.5)5) xlabel(-5(5)5) + + graph export "output/hist_life_betterworse.pdf", replace + + hist S1_AddictionIndex, frac /// + $HIST_CONTINUOUS_SETTINGS /// + xtitle(" " "Addiction scale", size(large)) + + graph export "output/hist_addiction_index.pdf", replace + + + hist S1_SMSIndex, frac /// + $HIST_CONTINUOUS_SETTINGS /// + xtitle(" " "SMS addiction scale", size(large)) + + graph export "output/hist_sms_index.pdf", replace + +end + +program plot_self_control_by_age + * Preserve data + preserve + + * Reshape data + keep UserID AgeGroup PD_P1_UsageFITSBY Strat*Index + rename_but, varlist(UserID AgeGroup) prefix(index) + reshape long index, i(UserID AgeGroup) j(measure) string + + * Recode data + encode measure, generate(measure_encode) + + recode measure_encode /// + (2 = 1 "Addiction index") /// + (3 = 2 "Restriction index") /// + (1 = 3 "Period 1 FITSBY Usage"), /// + gen(measure_recode) + + * Define plot settings + + // - When creating multiple y-axis plots, Stata unfortunately makes no + // attempt to align the different y-axes + // - Manually adjust the follaowing options to properly align the y-axes + // - Note that values for legend order are also manually specified + // (but do not need to be adjusted) as including multiple y-axes jumbles + // the legend order expected by the cispike command + local ylabel1 -.4(.2).6 + local ylabel2 100(10)200 + local yrange2 range(100, 200) + + * Plot data + + cispike index, /// + over1(measure_recode) over2(AgeGroup) /// + $CISPIKE_SETTINGS /// + spike( yaxis(1) || yaxis(1) || yaxis(2)) ci( yaxis(1) || yaxis(1) || yaxis(2)) /// + graphopts($CISPIKE_VERTICAL_GRAPHOPTS /// + ytitle("Standard deviations" " ", axis(1)) /// + ytitle(" " "Usage (minutes/day)", axis(2)) /// + ylabel(`ylabel1', axis(1)) /// + ylabel(`ylabel2', axis(2)) /// + yscale(`yrange2' axis(2)) /// + legend(order(11 "Addiction index" /// + 16 "Restriction index" /// + 26 "Period 1 FITSBY Usage"))) + + graph export "output/cispike_self_control_index_by_age.pdf", replace + + * Restore data + restore +end + +*********** +* Execute * +*********** + +main diff --git a/17/replication_package/code/analysis/descriptive/code/SampleStatistics.do b/17/replication_package/code/analysis/descriptive/code/SampleStatistics.do new file mode 100644 index 0000000000000000000000000000000000000000..ee246baf21b850cf2158a95c3f97d1085c9dcea9 --- /dev/null +++ b/17/replication_package/code/analysis/descriptive/code/SampleStatistics.do @@ -0,0 +1,138 @@ +// Sample statistics + +*************** +* Environment * +*************** + +clear all +adopath + "input/lib/ado" +adopath + "input/lib/stata/ado" + +********************* +* Utility functions * +********************* + +program define_constants + yaml read YAML using "input/config.yaml" +end + +program latex + syntax, name(str) value(str) + + local command = "\newcommand{\\`name'}{`value'}" + + file open scalars using "output/scalars.tex", write append + file write scalars `"`command'"' _n + file close scalars +end + +program latex_integer + syntax, name(str) value(str) + + local value : display %8.0gc `value' + local value = trim("`value'") + + latex, name(`name') value(`value') +end + +********************** +* Analysis functions * +********************** + +program main + 
define_constants + import_data + + get_samples +end + +program import_data + use "input/final_data.dta", clear +end + +program get_samples + cap sencode UserID, replace + + * Shown ad + latex, name(shownad) value("3,271,165") + + * Clicked on ad + sum UserID if S0_Finished != . + latex_integer, name(clickedonad) value(`r(N)') + + * Passed pre-screen + sum UserID if S0_Android == 1 & S0_Country == 1 & S0_Age >= 18 & S0_Age < 65 & /// + S0_PhoneCount == 1 & S0_Android == 1 + latex_integer, name(passedprescreen) value(`r(N)') + + * Consented + sum UserID if S0_Consent == 1 + latex_integer, name(consented) value(`r(N)') + + * Finished intake + sum UserID if S0_Finished == 1 & S0_Consent == 1 + latex_integer, name(finishedintake) value(`r(N)') + + * Began baseline + sum UserID if S1_Finished != . + latex_integer, name(beganbaseline) value(`r(N)') + + * Finished baseline + sum UserID if S1_Finished == 1 + latex_integer, name(finishedbaseline) value(`r(N)') + local finishedbaseline `r(N)' + + * Randomized + sum UserID if S1_Finished == 1 & Randomize == 1 + latex_integer, name(randomized) value(`r(N)') + local randomized `r(N)' + + * Dropped from baseline + local dropped = `finishedbaseline' - `randomized' + latex_integer, name(droppedbaseline) value(`dropped') + + * Began midline + sum UserID if S2_Finished != . + latex_integer, name(beganmidline) value(`r(N)') + + * Informed of treatment + sum UserID if S2_RevealConfirm == 1 + latex_integer, name(informedtreat) value(`r(N)') + + * Finished midline + sum UserID if S2_Finished == 1 & S2_RevealConfirm == 1 + latex_integer, name(finishedmidline) value(`r(N)') + + * Began endline + sum UserID if S3_Finished != . + latex_integer, name(beganendline) value(`r(N)') + + * Finished endline + sum UserID if S3_Finished == 1 + latex_integer, name(finishedendline) value(`r(N)') + + * Began post-endline + sum UserID if S4_Finished != . + latex_integer, name(beganpostendline) value(`r(N)') + + * Finished endline + sum UserID if S4_Finished == 1 + latex_integer, name(finishedpostendline) value(`r(N)') + + sum UserID if S4_Finished == 1 & PD_P5_Usage != . + latex_integer, name(kepttoend) value(`r(N)') + + * Analytical sizes + sum UserID if S2_RevealConfirm == 1 & S3_Bonus <= 1 + latex_integer, name(informedtreatanalysis) value(`r(N)') + + sum UserID if S2_RevealConfirm == 1 & S3_Bonus <= 1 & PD_P5_Usage != . 
& S4_Finished == 1 + latex_integer, name(kepttoendanalysis) value(`r(N)') + +end + +*********** +* Execute * +*********** + +main diff --git a/17/replication_package/code/analysis/descriptive/code/Scalars.do b/17/replication_package/code/analysis/descriptive/code/Scalars.do new file mode 100644 index 0000000000000000000000000000000000000000..ea78671a255419b2d1c78e64e8d48b819b18f84b --- /dev/null +++ b/17/replication_package/code/analysis/descriptive/code/Scalars.do @@ -0,0 +1,625 @@ +// Ad hoc scalars for text of main paper + +*************** +* Environment * +*************** + +clear all +adopath + "input/lib/ado" +adopath + "input/lib/stata/ado" + +********************* +* Utility functions * +********************* + +program define_constants + yaml read YAML using "input/config.yaml" + yaml global STRATA = YAML.metadata.strata +end + +program latex + syntax, name(str) value(str) + + local command = "\newcommand{\\`name'}{`value'}" + + file open scalars using "output/scalars.tex", write append + file write scalars `"`command'"' _n + file close scalars +end + +program latex_rounded + syntax, name(str) value(str) digits(str) + + local value : display %8.`digits'fc `value' + local value = trim("`value'") + + latex, name(`name') value(`value') +end + +program latex_precision + syntax, name(str) value(str) digits(str) + + autofmt, input(`value') dec(`digits') strict + local value = r(output1) + + latex, name(`name') value(`value') +end + +program reshape_swb + * Reshape wide to long + keep UserID S3_Bonus S2_LimitType Stratifier S*_SWBIndex_N + + local indep UserID S3_Bonus S2_LimitType Stratifier S1_* + rename_but, varlist(`indep') prefix(outcome) + reshape long outcome, i(`indep') j(measure) string + + split measure, p(_) + replace measure = measure2 + "_" + measure3 + rename measure1 survey + drop measure2 measure3 + + * Reshape long to wide + reshape wide outcome, i(UserID survey) j(measure) string + rename outcome* * + + * Recode data + encode survey, gen(S) + + * Label data + label var SWBIndex "Subjective well-being" +end + +********************** +* Analysis functions * +********************** + +program main + define_constants + import_sample_data + + get_usage_info_open + get_percent_fitsby + get_percent_limit + get_ideal_use + get_life_worse + get_addict + get_bonus_effect + get_limit_effect + get_valuations + get_baseline_usage + get_compare2019 + get_substitution + get_swb_pvalues + get_bonus_desire + get_pd_usage + get_medians + get_bound_use + + + * import_data + * get_other_blocker_use + +end + +program import_sample_data + use "input/final_data_sample.dta", clear +end + +program import_data + use "input/final_data.dta", clear +end + +program tab_percent + syntax, var(str) key(str) name(str) digits(str) + + * Generate dummy + cap drop dummy + gen dummy = 0 + replace dummy = 1 if inlist(`var', `key') + + * Tabulate dummy + sum dummy + local perc = `r(mean)' * 100 + latex_rounded, name(`name') value(`perc') digits(`digits') +end + +program get_usage_info_open + latex, name(usageinfoopen) value("XXX") // WIP +end + +program get_percent_fitsby + * Preserve data + preserve + + * Reshape data + keep UserID PD_*_Usage_* PD_*_Installed_* + keep UserID *Facebook *Instagram *Twitter *Snapchat *Browser *YouTube + rename_but, varlist(UserID) prefix(use) + reshape long use, i(UserID) j(j) string + + split j, p(_) + rename j4 app + + * Get apps used + collapse (sum) use, by(UserID app) + replace use = 1 if use > 0 & use != . 
+ + * Get number of apps used + collapse (sum) use, by(UserID) + + * Get percent all apps used + tab_percent, /// + var(use) key(6) /// + name(percentfitsby) digits(1) + + * Restore data + restore +end + +program get_percent_limit + * Get percent moderately or very interested + tab_percent, /// + var(S1_InterestInLimits) key(3, 4) /// + name(percentlimitinterested) digits(0) + + * Get percent not at all interested + tab_percent, /// + var(S1_InterestInLimits) key(1) /// + name(percentlimitnot) digits(0) +end + +program get_ideal_use + * Get percent just right + tab_percent, /// + var(S1_PhoneUseFeel) key(2) /// + name(percentuseright) digits(0) + + * Get percent too little + tab_percent, /// + var(S1_PhoneUseFeel) key(3) /// + name(percentuselittle) digits(1) + + * Get mean total ideal reduction + sum S1_PhoneUseReduce + local mean = r(mean) + latex_rounded, name(idealreduction) value(`mean') digits(0) + + * Get mean Facebook ideal reduction + recode S1_IdealApp_Facebook /// + (1 = -75 ) /// + (2 = -37.5) /// + (3 = -12.5) /// + (4 = 0 ) /// + (5 = 12.5) /// + (6 = 37.5) /// + (7 = 75 ) /// + (8 = 0 ), /// + gen(S1_IdealApp_Facebook_recode) + + sum S1_IdealApp_Facebook_recode + local mean = r(mean) * -1 + latex_rounded, name(idealreductionfacebook) value(`mean') digits(0) + + * Get mean Instagram ideal reduction + recode S1_IdealApp_Instagram /// + (1 = -75 ) /// + (2 = -37.5) /// + (3 = -12.5) /// + (4 = 0 ) /// + (5 = 12.5) /// + (6 = 37.5) /// + (7 = 75 ) /// + (8 = 0 ), /// + gen(S1_IdealApp_Instagram_recode) + + sum S1_IdealApp_Instagram_recode + local mean = r(mean) * -1 + latex_rounded, name(idealreductioninsta) value(`mean') digits(0) + + * Get mean Twitter ideal reduction + recode S1_IdealApp_Twitter /// + (1 = -75 ) /// + (2 = -37.5) /// + (3 = -12.5) /// + (4 = 0 ) /// + (5 = 12.5) /// + (6 = 37.5) /// + (7 = 75 ) /// + (8 = 0 ), /// + gen(S1_IdealApp_Twitter_recode) + + sum S1_IdealApp_Twitter_recode + local mean = r(mean) * -1 + latex_rounded, name(idealreductiontwitter) value(`mean') digits(0) + + * Get mean Snapchat ideal reduction + recode S1_IdealApp_Snapchat /// + (1 = -75 ) /// + (2 = -37.5) /// + (3 = -12.5) /// + (4 = 0 ) /// + (5 = 12.5) /// + (6 = 37.5) /// + (7 = 75 ) /// + (8 = 0 ), /// + gen(S1_IdealApp_Snapchat_recode) + + sum S1_IdealApp_Snapchat_recode + local mean = r(mean) * -1 + latex_rounded, name(idealreductionsnap) value(`mean') digits(0) + + * Get mean Browser ideal reduction + recode S1_IdealApp_Browser /// + (1 = -75 ) /// + (2 = -37.5) /// + (3 = -12.5) /// + (4 = 0 ) /// + (5 = 12.5) /// + (6 = 37.5) /// + (7 = 75 ) /// + (8 = 0 ), /// + gen(S1_IdealApp_Browser_recode) + + sum S1_IdealApp_Browser_recode + local mean = r(mean) * -1 + latex_rounded, name(idealreductionbrowser) value(`mean') digits(0) + + * Get mean YouTube ideal reduction + recode S1_IdealApp_YouTube /// + (1 = -75 ) /// + (2 = -37.5) /// + (3 = -12.5) /// + (4 = 0 ) /// + (5 = 12.5) /// + (6 = 37.5) /// + (7 = 75 ) /// + (8 = 0 ), /// + gen(S1_IdealApp_YouTube_recode) + + sum S1_IdealApp_YouTube_recode + local mean = r(mean) * -1 + latex_rounded, name(idealreductionyoutube) value(`mean') digits(0) + +end + +program get_life_worse + * Get percent life worse + tab_percent, /// + var(S1_LifeBetter) key(-5, -4, -3, -2, -1) /// + name(percentlifeworse) digits(0) +end + +program get_addict + * Get mean addiction index + sum S1_AddictionIndex + local mean = r(mean) * -1 + latex_rounded, name(scaleaddict) value(`mean') digits(1) +end + +program get_bonus_effect + preserve + local baseline 
PD_P1_UsageFITSBY + local yvar PD_P2_UsageFITSBY + gen_treatment, simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') simple + local treatment = -_b[B] + latex_precision, name(bonustwo) value(`treatment') digits(2) + + local baseline PD_P1_UsageFITSBY + local yvar PD_P3_UsageFITSBY + gen_treatment, simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') simple + local treatment = -_b[B] + latex_precision, name(bonusthree) value(`treatment') digits(2) + + sum PD_P3_UsageFITSBY if B == 0 & L == 0 + local reduction = (`treatment'/r(mean))*100 + latex_precision, name(bonusthreepct) value(`reduction') digits(2) + + local baseline PD_P1_UsageFITSBY + local yvar PD_P4_UsageFITSBY + gen_treatment, simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') simple + local treatment4 = -_b[B] + latex_precision, name(bonusfour) value(`treatment4') digits(2) + + local baseline PD_P1_UsageFITSBY + local yvar PD_P5_UsageFITSBY + gen_treatment, simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') simple + local treatment5 = -_b[B] + latex_precision, name(bonusfive) value(`treatment5') digits(2) + restore +end + +program get_limit_effect + preserve + + local baseline PD_P1_UsageFITSBY + local yvar PD_P5432_UsageFITSBY + gen_treatment, simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') simple + local treatment = -_b[L] + latex_precision, name(limiteffectstataadhoc) value(`treatment') digits(2) + + sum PD_P5432_UsageFITSBY if B == 0 & L == 0 + local reduction = (`treatment'/r(mean))*100 + + latex_precision, name(limiteffectpct) value(`reduction') digits(2) + restore +end + +program get_valuations + preserve + + sum S2_MPL + local vb = r(mean) + latex_precision, name(valuebonus) value(`vb') digits(2) + + sum S3_MPLLimit + local vl = r(mean) + local numlimit = r(N) + latex_precision, name(valuelimit) value(`vl') digits(3) + + + sum S3_MPLLimit if S3_MPLLimit > 0 + local numpaylimit = r(N) + local positivelimit = (`numpaylimit' / `numlimit') * 100 + latex_precision, name(positivelimit) value(`positivelimit') digits(2) + + sum S3_MPLLimit if S3_MPLLimit > 10 + local numpayten = r(N) + local tenlimit = (`numpayten' / `numlimit') * 100 + latex_precision, name(tenlimit) value(`tenlimit') digits(2) + + restore +end + +program get_baseline_usage + preserve + + sum PD_P1_Usage + local avg_all = r(mean) + latex_precision, name(avgOverall) value(`avg_all') digits(2) + + sum PD_P1_UsageFITSBY + local avg_fitsby = r(mean) + latex_precision, name(avgFITSBY) value(`avg_fitsby') digits(2) + + local avg_fitsby_pct = (`avg_fitsby' / `avg_all') * 100 + latex_precision, name(avgFITSBYpct) value(`avg_fitsby_pct') digits(2) + + sum PD_P1_Usage_Facebook + local avg_fb = r(mean) + latex_precision, name(avgFB) value(`avg_fb') digits(2) + + sum PD_P1_Usage_Browser + local avg_br = r(mean) + latex_precision, name(avgBR) value(`avg_br') digits(2) + + sum PD_P1_Usage_YouTube + local avg_yt = r(mean) + latex_precision, name(avgYT) value(`avg_yt') digits(2) + + sum PD_P1_Usage_Instagram + local avg_in = r(mean) + latex_precision, name(avgIN) value(`avg_in') digits(2) + + sum PD_P1_Usage_Snapchat + local avg_sc = r(mean) + latex_precision, name(avgSC) value(`avg_sc') digits(2) + + sum PD_P1_Usage_Twitter + local avg_tw = r(mean) + latex_precision, name(avgTW) value(`avg_tw') digits(2) + + restore +end + +program get_compare2019 + preserve + + sum S1_CovidChangesFreeTime + local ss = r(N) + + sum S1_CovidChangesFreeTime if S1_CovidChangesFreeTime > 4 + local num_worse = r(N) + + local covidfree = 
100 * `num_worse'/`ss' + latex_precision, name(covidmorefree) value(`covidfree') digits(2) + + recode S1_CovidChangeReason /// + (1 = 4 "Increased phone usage") /// + (2 = 4 "Increased phone usage") /// + (3 = 3 "No change") /// + (4 = 4 "Increased phone usage") /// + (5 = 2 "Decreased phone usage") /// + (6 = 1 "Other"), /// + gen(S1_CovidChangeReason_recode) + + sum S1_CovidChangesFreeTime + local ss2 = r(N) + + sum S1_CovidChangeReason_recode if S1_CovidChangeReason_recode == 4 + local num_more_phone = r(N) + + local morephoneuse = 100 * `num_more_phone'/`ss2' + latex_precision, name(morephoneuse) value(`morephoneuse') digits(2) + + restore +end + +program get_substitution + preserve + + gen_treatment, simple + reg_treatment, yvar(S4_Substitution_W) indep($STRATA) simple + + local bsub = -_b[B] + latex_precision, name(bonussubstitution) value(`bsub') digits(2) + + local lsub = _b[L] + latex_precision, name(limitsubstitution) value(`lsub') digits(2) + + gen avg_overall = (PD_P3_Usage + PD_P4_Usage + PD_P5_Usage)/3 + gen avg_fitsby = (PD_P3_UsageFITSBY + PD_P4_UsageFITSBY + PD_P5_UsageFITSBY)/3 + + gen avg_non_fitsby = avg_overall - avg_fitsby + reg_treatment, yvar(avg_non_fitsby) indep($STRATA) simple + + local bsub = -_b[B] + latex_precision, name(bonusnonfitsby) value(`bsub') digits(2) + + local lsub = _b[L] + latex_precision, name(limitnonfitsby) value(`lsub') digits(2) + restore +end + +program get_swb_pvalues + est clear + + * Preserve data + preserve + + * Reshape data + reshape_swb + + * Specify regression + local yvar SWBIndex_N + + * Run regressions + local baseline = "S1_`yvar'" + + * Treatment indicators + gen_treatment, simple + cap drop B3 + cap drop B4 + gen B3 = B * (S == 1) + gen B4 = B * (S == 2) + + * Specify regression + local indep i.S i.S#$STRATA i.S#c.`baseline' + + reg `yvar' L B3 B4 `indep', robust cluster(UserID) + + local lprob = _P[L] + local lcoef = _b[L] + latex_precision, name(limitSWBpval) value(`lprob') digits(2) + latex_precision, name(limitSWBcoef) value(`lcoef') digits(1) + + local bprob = _P[B4] + local bcoef = _b[B4] + latex_precision, name(bonusSWBpval) value(`bprob') digits(2) + latex_precision, name(bonusSWBcoef) value(`bcoef') digits(1) + + * Restore data + restore +end + +program get_bonus_desire + sum S2_PredictUseInitial_W + local avg_prediction = r(mean) + latex_precision, name(MPLprediction) value(`avg_prediction') digits(2) + + sum S2_PredictUseBonus + local avg_reduction_pct = r(mean) + latex_precision, name(MPLreductionpct) value(`avg_reduction_pct') digits(2) + + gen reduction = S2_PredictUseInitial_W * (S2_PredictUseBonus / 100) + sum reduction + local avg_reduction_mins = r(mean) + latex_precision, name(MPLreductionmins) value(`avg_reduction_mins') digits(2) + + gen value = (reduction/60)*50 + sum value + local avg_bonus_earnings = r(mean) + latex_precision, name(MPLearnings) value(`avg_bonus_earnings') digits(2) + + sum S2_MPL + local avg_value_bonus = r(mean) + latex_precision, name(MPLvalue) value(`avg_value_bonus') digits(2) + + gen premium = S2_MPL - value + sum premium + local avg_premium = r(mean) + latex_precision, name(MPLpremium) value(`avg_premium') digits(2) + + sum S2_MPLReasoning + local total_respondents = r(N) + + sum S2_MPLReasoning if S2_MPLReasoning == 2 + local wish_reduce = r(N) + local wish_reduce_pct = (`wish_reduce' / `total_respondents') * 100 + latex_precision, name(MPLwishreduce) value(`wish_reduce_pct') digits(2) + + sum S2_MPLReasoning if S2_MPLReasoning == 1 + local maximize = r(N) + local maximize_pct = 
(`maximize' / `total_respondents') * 100 + latex_precision, name(MPLmaximize) value(`maximize_pct') digits(2) + + sum S2_MPLReasoning if S2_MPLReasoning == 3 + local no_pressure = r(N) + local no_pressure_pct = (`no_pressure' / `total_respondents') * 100 + latex_precision, name(MPLnopressure) value(`no_pressure_pct') digits(2) + + sum premium if S2_MPLReasoning == 2 + local premium_reduce = r(mean) + + sum premium if S2_MPLReasoning == 3 + local premium_no_pressure = r(mean) + + local premium_difference = `premium_reduce' - `premium_no_pressure' + latex_precision, name(MPLpremiumdifference) value(`premium_difference') digits(2) +end + +program get_pd_usage + gen_treatment, simple + + sum PD_P5432_UsageMinutesPD if B == 1 + local mins_bonus = r(mean) / 84 + latex_precision, name(BonusPDmins) value(`mins_bonus') digits(2) + + sum PD_P5432_UsageMinutesPD if L == 1 + local mins_limit = r(mean) / 84 + latex_precision, name(LimitPDmins) value(`mins_limit') digits(2) + + sum PD_P5432_UsageMinutesPD if B == 0 & L == 0 + local mins_control = r(mean) / 84 + latex_precision, name(ControlPDmins) value(`mins_control') digits(1) + +end + +program get_medians + sum S0_Age, detail + local med_age = r(p50) + latex_precision, name(MedianAge) value(`med_age') digits(2) + + sum PD_P1_UsageFITSBY, detail + local med_use = r(p50) + latex_precision, name(MedianFITSBYUsage) value(`med_use') digits(2) +end + +program get_bound_use + gen baseline = ceil(PD_P1_UsageFITSBY/60)*60 + gen exceeds = PD_P3_UsageFITSBY > baseline + sum exceeds if S3_Bonus == 1 + local pct_exceed = r(mean) * 100 + latex_precision, name(PercentExceedBonus) value(`pct_exceed') digits(2) + + gen huge_drop = PD_P3_UsageFITSBY < (baseline - 180) + sum huge_drop if S3_Bonus == 1 + local pct_huge_drop = r(mean) * 100 + latex_rounded, name(PercentBoundDrop) value(`pct_huge_drop') digits(0) +end + +program get_other_blocker_use + sum S1_OtherLimitUse if S1_Finished == 1 + local pct_other_blocker_use = r(mean) * 100 + latex_rounded, name(OtherBlockerUse) value(`pct_other_blocker_use') digits(0) +end + +*********** +* Execute * +*********** + +main diff --git a/17/replication_package/code/analysis/descriptive/code/Temptation.do b/17/replication_package/code/analysis/descriptive/code/Temptation.do new file mode 100644 index 0000000000000000000000000000000000000000..a2bdd91a4387ce084e039696d9f79099ccc6c8c4 --- /dev/null +++ b/17/replication_package/code/analysis/descriptive/code/Temptation.do @@ -0,0 +1,100 @@ +// Figure 1 + +*************** +* Environment * +*************** + +clear all +adopath + "input/lib/ado" +adopath + "input/lib/stata/ado" + +********************* +* Utility functions * +********************* + +program define_constants + yaml read YAML using "input/config.yaml" + yaml global STRATA = YAML.metadata.strata + + global app_list Facebook Instagram Twitter Snapchat Browser YouTube Other +end + + + +********************** +* Analysis functions * +********************** + +program main + define_constants + import_data + plot_figure_1 +end + +program import_data + use "input/final_data_sample.dta", clear +end + + +program plot_figure_1 + * Preserve data + preserve + + * Drop unnecessary columns + keep UserID S4_Temptation_* + * Pivot the columns into a new variable + reshape long S4_ , i(UserID) j(control) string + * Assign values to too little (-1) the right amount (0) too much (1) + recode S4_ (1 = -1) (2 = 0 ) (3 = 1), gen(S4_N) + + * Relabel + replace control="Exercise" if control=="Temptation_1" + replace control="{bf:Use 
smartphone" if control=="Temptation_2" + replace control="Eat unhealthy food" if control=="Temptation_3" + replace control="{bf:Check email" if control=="Temptation_4" + replace control="{bf:Play video games" if control=="Temptation_5" + replace control="Watch TV" if control=="Temptation_6" + replace control="Work" if control=="Temptation_7" + replace control="{bf:Browse social media" if control=="Temptation_8" + replace control="Smoke cigarettes" if control=="Temptation_9" + replace control="{bf:Read online news" if control=="Temptation_10" + replace control="Drink alcohol" if control=="Temptation_11" + replace control="Sleep" if control=="Temptation_12" + replace control="Save for retirement" if control=="Temptation_13" + + * Collapse to values of interest + drop UserID + collapse (mean) S4_m = S4_N (semean) S4_se=S4_N (count) S4_count = S4_N, by(control) + * Change label for - values, take absolute, and sort. + replace control=control+" (-1)" if S4_m<0 + replace control=control+"}" if strpos(control,"bf")>0 + + replace S4_m=abs(S4_m) + gsort -S4_m + + * Create 95% CI bands + gen S4_m_lb = S4_m - 1.96*S4_se + gen S4_m_ub = S4_m + 1.96*S4_se + + * Plot + gen axis = _n + labmask axis, val(control) + + twoway (rcap S4_m_lb S4_m_ub axis, lcolor(maroon)) (scatter S4_m axis, msize(small)), /// + xlabel(1(1)13,valuelabel angle(45) labsize(small)) /// + ytitle("absolute value of" "(share “too much” – share “too little”)", /// + size(small)) xtitle("{&larr} more perceived self-control problems | less perceived self-control problems {&rarr}") legend(off) graphregion(color(white)) + + graph export "output/online_and_offline_temptation_scatter.pdf", replace + + * Restore data + restore + + +end + +*********** +* Execute * +*********** + +main diff --git a/17/replication_package/code/analysis/descriptive/input.txt b/17/replication_package/code/analysis/descriptive/input.txt new file mode 100644 index 0000000000000000000000000000000000000000..533a98f00574bbbd5f526ac13b140a75e75eeafd --- /dev/null +++ b/17/replication_package/code/analysis/descriptive/input.txt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1aacb5e3a47846afaf251dbe069f5cee136e1255fe4ec43721a033fafc1d837d +size 812 diff --git a/17/replication_package/code/analysis/descriptive/make.py b/17/replication_package/code/analysis/descriptive/make.py new file mode 100644 index 0000000000000000000000000000000000000000..688853a2b769787105338a14d207da361e59489c --- /dev/null +++ b/17/replication_package/code/analysis/descriptive/make.py @@ -0,0 +1,75 @@ +################### +### ENVIRONMENT ### +################### +import git +import imp +import os + +### SET DEFAULT PATHS +ROOT = '../..' 
+ +PATHS = { + 'root' : ROOT, + 'lib' : os.path.join(ROOT, 'lib'), + 'config' : os.path.join(ROOT, 'config.yaml'), + 'config_user' : os.path.join(ROOT, 'config_user.yaml'), + 'input_dir' : 'input', + 'external_dir' : 'external', + 'output_dir' : 'output', + 'output_local_dir' : 'output_local', + 'makelog' : 'log/make.log', + 'output_statslog' : 'log/output_stats.log', + 'source_maplog' : 'log/source_map.log', + 'source_statslog' : 'log/source_stats.log', +} + +### LOAD GSLAB MAKE +f, path, desc = imp.find_module('gslab_make', [PATHS['lib']]) +gs = imp.load_module('gslab_make', f, path, desc) + +### LOAD CONFIG USER +PATHS = gs.update_paths(PATHS) +gs.update_executables(PATHS) + +############ +### MAKE ### +############ + +### START MAKE +gs.remove_dir(['input', 'external']) +gs.clear_dir(['output', 'log', 'temp']) +gs.start_makelog(PATHS) + +### GET INPUT FILES +inputs = gs.link_inputs(PATHS, ['input.txt']) +# gs.write_source_logs(PATHS, inputs + externals) +# gs.get_modified_sources(PATHS, inputs + externals) + +### RUN SCRIPTS +""" +Critical +-------- +Many of the Stata analysis scripts recode variables using +the `recode` command. Double-check all `recode` commands +to confirm recoding is correct, especially when reusing +code for a different experiment version. +""" + +gs.run_stata(PATHS, program = 'code/Scalars.do') +#gs.run_stata(PATHS, program = 'code/SampleStatistics.do') +gs.run_stata(PATHS, program = 'code/DataDescriptives.do') +gs.run_stata(PATHS, program = 'code/QualitativeEvidence.do') +gs.run_stata(PATHS, program = 'code/CommitmentDemand.do') +gs.run_stata(PATHS, program = 'code/COVIDResponse.do') +gs.run_stata(PATHS, program = 'code/Temptation.do') + +gs.run_r(PATHS, program = 'code/HeatmapPlots.R') + +### LOG OUTPUTS +gs.log_files_in_output(PATHS) + +### CHECK FILE SIZES +#gs.check_module_size(PATHS) + +### END MAKE +gs.end_makelog(PATHS) diff --git a/17/replication_package/code/analysis/structural/.RData b/17/replication_package/code/analysis/structural/.RData new file mode 100644 index 0000000000000000000000000000000000000000..673f4eee46b3b3e20d7d8b831e75dd7bf51b812c --- /dev/null +++ b/17/replication_package/code/analysis/structural/.RData @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:7292ca6438e8b7e80608a5015f136523305b10e1497fa96808904f0c51ab72cd +size 3785816 diff --git a/17/replication_package/code/analysis/structural/.Rhistory b/17/replication_package/code/analysis/structural/.Rhistory new file mode 100644 index 0000000000000000000000000000000000000000..3fdd0974ae6dd5ab21714b5b09f131394b5b99c4 --- /dev/null +++ b/17/replication_package/code/analysis/structural/.Rhistory @@ -0,0 +1,512 @@ +install.packages("optimx") +library("optimx") +library("stats") +library("tidyverse") +square_function <- function(x){ +y = x^2 +return(y) +} +square_function(x=3) +square_function(x=0) +intial_values <- c(-2,1,2) +minimise_function <- optimr(intial_values, square_function) +intial_values <- c(-2) +minimise_function <- optimr(intial_values, square_function) +minimise_function <- optimr(intial_values, square_function, method = "Brent") +minimise_function <- optimr(intial_values, square_function) +minimise_function <- optimr(par = intial_values, fn=square_function, method = "Brent") +install.packages("Brent") +minimise_function <- optimize(f=square_function, lower = -10, upper=10) +minimise_function$par +minimise_function +minimise_function <- optimize(f=square_function, lower = -10000000, upper=100000000) +minimise_function +cube_function <- function(x){ +y 
= x^3 +return(y) +} +minimise_function <- optimize(f=cube_function, lower = -10000000, upper=100000000) +minimise_function +sinus_function <- function(x){ +y = sin(x) +return(y) +} +minimise_function <- optimize(f=sinus_function, lower = -10000000, upper=100000000) +minimise_function +bivariate_function <- function(x,y){ +z <- 2*x*(y**2)+2*(x**2)*y+x*y +return(z) +} +# 1. First try a few values of x, y and see how it affect z +x<- seq(-0.5,0.5, len=200) +y<- seq(-0.5,0.5, len=200) +z <- outer(x,y,bivariate_function) +persp(x,y,z, theta=-30,phi=15,ticktype="detailed") +image(x,y,z) +bivariate_function_vector <- function(vec){ +x <- vec[1] +y <- vec[2] +z <- 2*x*(y**2)+2*(x**2)*y+x*y +return(z) +} +minimise_function_bivariate <- optimr(par = c(0.5,0.5), bivariate_function_vector, control=list(fnscale=-1)) +minimise_function_bivariate$par +minimise_function_bivariate <- optimr(par = c(0.5,0.5), bivariate_function_vector) +minimise_function_bivariate$par +minimise_function_bivariate$par +minimise_function_bivariate <- optimr(par = c(0.5,0.5), bivariate_function) +minimise_function_bivariate <- optimr(par = c(0.5,0.5), bivariate_function_vector) +minimise_function_bivariate$par +bivariate_function_vector <- function(vec){ +x <- vec[1] +y <- vec[2] +z <- (1-x)^2 + 100*(y-x^2) +return(z) +} +minimise_function_bivariate <- optimr(par = c(0,0), bivariate_function_vector) +minimise_function_bivariate$par +bivariate_function_vector <- function(vec){ +x <- vec[1] +y <- vec[2] +z <- (1-x)^2 + 100*(y-x^2)^2 +return(z) +} +minimise_function_bivariate <- optimr(par = c(0,0), bivariate_function_vector) +minimise_function_bivariate$par +remvove(list=ls()) +remove(list=ls()) +getwd() +setwd("/Users/houdanaitelbarj/Desktop/PhoneAddiction/analysis/structural") +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Setup +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Import plotting functions and constants from lib file +source('input/lib/r/ModelFunctions.R') +# Import data +df <- import_data() +param %<>% +list.merge( +#get_opt(df), +get_taus(df, winsorize=winsorize, full=full), +get_mispredict(df), +get_ideal(df), +get_predict(df), +get_wtp(df), +get_avg_use(df), +get_fb(df), +get_limit_last_week(df) +) +param <- param_initial +param %<>% +list.merge( +#get_opt(df), +get_taus(df, winsorize=winsorize, full=full), +get_mispredict(df), +get_ideal(df), +get_predict(df), +get_wtp(df), +get_avg_use(df), +get_fb(df), +get_limit_last_week(df) +) +winsorize=F +full=F +param %<>% +list.merge( +#get_opt(df), +get_taus(df, winsorize=winsorize, full=full), +get_mispredict(df), +get_ideal(df), +get_predict(df), +get_wtp(df), +get_avg_use(df), +get_fb(df), +get_limit_last_week(df) +) +View(param) +param %<>% +solve_sys_eq_1 %>% +as.list %>% +list.merge(param) +# Solve system of equations #2 +param %<>% +solve_sys_eq_2(display_warning=display_warning) %>% +as.list %>% +list.merge(param) +display_warning=FALSE +# Solve system of equation #1 +param %<>% +solve_sys_eq_1 %>% +as.list %>% +list.merge(param) +# Solve system of equations #2 +param %<>% +solve_sys_eq_2(display_warning=display_warning) %>% +as.list %>% +list.merge(param) +param %<>% +solve_sys_eq_3 %>% +as.list %>% +list.merge(param) +# Solve for individual effects +tau_L_2_spec <- find_tau_L2_spec(df) +tau_tilde_spec <- find_tau_L3_spec(df) +x_ss_i_data <- calculate_x_ss_i_spec(df) +param %<>% +solve_effects_individual(x_ss_i_data= x_ss_i_data, tau_tilde_L=tau_tilde_spec, tau_L_2=tau_L_2_spec, 
w=df$w)%>% +as.list %>% +list.merge(param) +rho <- param[['rho']] +lambda <- param[['lambda']] +rho_res <- param[['rho_res']] +lambda_res <- param[['lambda_res']] +delta <- param[['delta']] +alpha <- param[['alpha']] +omega <- param[['omega']] +omega_est <- param[['omega_est']] +mispredict <- param[['mispredict']] +d_L <- param[['d_L']] +d_CL <- param[['d_CL']] +eta <- param[['eta']] +zeta <- param[['zeta']] +naivete <- param[['naivete']] +gamma_L_effect <- param[['gamma_L_effect']] +gamma_tilde_L_effect <- param[['gamma_tilde_L_effect']] +gamma_tilde_L_effect_omega <- param[['gamma_tilde_L_effect_omega']] +gamma_L_effect_omega <- param[['gamma_L_effect_omega']] +gamma_L_effect_multiple <- param[['gamma_L_effect_multiple']] +gamma_tilde_L_effect_multiple <- param[['gamma_tilde_L_effect_multiple']] +gamma_L <- param[['gamma_L']] +gamma_tilde_L <- param[['gamma_tilde_L']] +gamma_tilde_L_omega <- param[['gamma_tilde_L_omega']] +gamma_L_omega <- param[['gamma_L_omega']] +gamma_tilde_L_multiple <- param[['gamma_tilde_L_multiple']] +gamma_L_multiple <- param[['gamma_L_multiple']] +gamma_B <- param[['gamma_B']] +gamma_tilde_B <- param[['gamma_tilde_B']] +gamma_tilde_B_multiple <- param[['gamma_tilde_B_multiple']] +gamma_B_multiple <- param[['gamma_B_multiple']] +eta_res <- param[['eta_res']] +zeta_res <- param[['zeta_res']] +naivete_res <- param[['naivete_res']] +gamma_L_effect_res <- param[['gamma_L_effect_res']] +gamma_tilde_L_effect_res <- param[['gamma_tilde_L_effect_res']] +gamma_tilde_L_effect_omega_res <- param[['gamma_tilde_L_effect_omega_res']] +gamma_L_effect_omega_res <- param[['gamma_L_effect_omega_res']] +gamma_tilde_L_effect_multiple_res <- param[['gamma_tilde_L_effect_multiple_res']] +gamma_L_res <- param[['gamma_L_res']] +gamma_L_omega_res <- param[['gamma_L_omega_res']] +gamma_L_multiple_res <- param[['gamma_L_multiple_res']] +gamma_B_res <- param[['gamma_B_res']] +gamma_B_multiple_res <- param[['gamma_B_multiple_res']] +tau_L_2_signed <- param[['tau_L_2']]*-1 +# Gamma-spec +term1 <- (1-alpha)*delta*rho +term2 <- term1*(1+lambda) +term3 <- (eta*lambda + zeta*(1 - lambda))*(rho*tau_L_2/omega) +num <- eta*tau_L_2/omega - term1*term3 - term2*naivete +denom <- 1 - term2 +num_omega <- eta*tau_L_2/omega_est - term1*term3 - term2*naivete +gamma_spec <- num/denom +gamma_spec_omega <- num_omega/denom +gamma_tilde_spec <- gamma_spec - naivete +gamma_tilde_spec_omega <- gamma_spec_omega - naivete +tau_L_2 <- param[['tau_L_2']] +# Gamma-spec +term1 <- (1-alpha)*delta*rho +term2 <- term1*(1+lambda) +term3 <- (eta*lambda + zeta*(1 - lambda))*(rho*tau_L_2/omega) +num <- eta*tau_L_2/omega - term1*term3 - term2*naivete +denom <- 1 - term2 +num_omega <- eta*tau_L_2/omega_est - term1*term3 - term2*naivete +gamma_spec <- num/denom +gamma_spec_omega <- num_omega/denom +gamma_tilde_spec <- gamma_spec - naivete +gamma_tilde_spec_omega <- gamma_spec_omega - naivete +intercept_spec <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_spec, gamma_spec, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L_effect <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_effect, gamma_L_effect, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_B <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_B, gamma_B, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L, gamma_L, alpha, rho, lambda, mispredict, eta, zeta) +intercept_spec_omega <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_spec_omega, 
gamma_spec_omega, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L_effect_omega <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_effect_omega, gamma_L_effect_omega, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L_omega <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_omega, gamma_L_omega, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L_effect_multiple <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_effect_multiple, gamma_L_effect_multiple, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_B_multiple <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_B_multiple, gamma_B_multiple, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L_multiple <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_multiple, gamma_L_multiple, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L_effect_eta_high <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_effect, gamma_L_effect, alpha, rho, lambda, mispredict, eta, zeta, eta_scale=1.1) +intercept_het_L_effect_eta_low <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_effect, gamma_L_effect, alpha, rho, lambda, mispredict, eta, zeta, eta_scale=0.9) +x_ss_spec <- calculate_steady_state(param, gamma_tilde_spec, gamma_spec, alpha, rho, lambda, mispredict, eta, zeta, intercept_spec) +x_ss_zero_un <- calculate_steady_state(param, 0, 0, alpha, rho, lambda, 0, eta, zeta, intercept_spec) +x_ss_zero <- ifelse(x_ss_zero_un<0, 0, x_ss_zero_un) +delta_x <- x_ss_spec - x_ss_zero +x_ss_spec_w <- weighted.mean(x_ss_spec, w, na.rm=T) +w=df$w +x_ss_spec_w <- weighted.mean(x_ss_spec, w, na.rm=T) +rho <- param[['rho']] +lambda <- param[['lambda']] +rho_res <- param[['rho_res']] +lambda_res <- param[['lambda_res']] +delta <- param[['delta']] +alpha <- param[['alpha']] +omega <- param[['omega']] +omega_est <- param[['omega_est']] +mispredict <- param[['mispredict']] +d_L <- param[['d_L']] +d_CL <- param[['d_CL']] +eta <- param[['eta']] +zeta <- param[['zeta']] +naivete <- param[['naivete']] +gamma_L_effect <- param[['gamma_L_effect']] +gamma_tilde_L_effect <- param[['gamma_tilde_L_effect']] +gamma_tilde_L_effect_omega <- param[['gamma_tilde_L_effect_omega']] +gamma_L_effect_omega <- param[['gamma_L_effect_omega']] +gamma_L_effect_multiple <- param[['gamma_L_effect_multiple']] +gamma_tilde_L_effect_multiple <- param[['gamma_tilde_L_effect_multiple']] +gamma_L <- param[['gamma_L']] +gamma_tilde_L <- param[['gamma_tilde_L']] +gamma_tilde_L_omega <- param[['gamma_tilde_L_omega']] +gamma_L_omega <- param[['gamma_L_omega']] +gamma_tilde_L_multiple <- param[['gamma_tilde_L_multiple']] +gamma_L_multiple <- param[['gamma_L_multiple']] +gamma_B <- param[['gamma_B']] +gamma_tilde_B <- param[['gamma_tilde_B']] +gamma_tilde_B_multiple <- param[['gamma_tilde_B_multiple']] +gamma_B_multiple <- param[['gamma_B_multiple']] +eta_res <- param[['eta_res']] +zeta_res <- param[['zeta_res']] +naivete_res <- param[['naivete_res']] +gamma_L_effect_res <- param[['gamma_L_effect_res']] +gamma_tilde_L_effect_res <- param[['gamma_tilde_L_effect_res']] +gamma_tilde_L_effect_omega_res <- param[['gamma_tilde_L_effect_omega_res']] +gamma_L_effect_omega_res <- param[['gamma_L_effect_omega_res']] +gamma_tilde_L_effect_multiple_res <- param[['gamma_tilde_L_effect_multiple_res']] +gamma_L_res <- param[['gamma_L_res']] +gamma_L_omega_res <- param[['gamma_L_omega_res']] +gamma_L_multiple_res <- param[['gamma_L_multiple_res']] +gamma_B_res <- param[['gamma_B_res']] 
+gamma_B_multiple_res <- param[['gamma_B_multiple_res']] +tau_L_2 <- param[['tau_L_2']] +tau_L_2_signed <- param[['tau_L_2']]*-1 +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Calculate individual intercepts and steady states under different strategies - Unrestricted alpha +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Gamma-spec +term1 <- (1-alpha)*delta*rho +term2 <- term1*(1+lambda) +term3 <- (eta*lambda + zeta*(1 - lambda))*(rho*tau_L_2/omega) +num <- eta*tau_L_2/omega - term1*term3 - term2*naivete +denom <- 1 - term2 +num_omega <- eta*tau_L_2/omega_est - term1*term3 - term2*naivete +gamma_spec <- num/denom +gamma_spec_omega <- num_omega/denom +gamma_tilde_spec <- gamma_spec - naivete +gamma_tilde_spec_omega <- gamma_spec_omega - naivete +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Calculate individual intercepts and steady states under different strategies +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +intercept_spec <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_spec, gamma_spec, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L_effect <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_effect, gamma_L_effect, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_B <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_B, gamma_B, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L, gamma_L, alpha, rho, lambda, mispredict, eta, zeta) +intercept_spec_omega <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_spec_omega, gamma_spec_omega, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L_effect_omega <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_effect_omega, gamma_L_effect_omega, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L_omega <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_omega, gamma_L_omega, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L_effect_multiple <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_effect_multiple, gamma_L_effect_multiple, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_B_multiple <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_B_multiple, gamma_B_multiple, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L_multiple <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_multiple, gamma_L_multiple, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L_effect_eta_high <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_effect, gamma_L_effect, alpha, rho, lambda, mispredict, eta, zeta, eta_scale=1.1) +intercept_het_L_effect_eta_low <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_effect, gamma_L_effect, alpha, rho, lambda, mispredict, eta, zeta, eta_scale=0.9) +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Calculate individual counterfactuals +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +x_ss_spec <- calculate_steady_state(param, gamma_tilde_spec, gamma_spec, alpha, rho, lambda, mispredict, eta, zeta, intercept_spec) +x_ss_zero_un <- calculate_steady_state(param, 0, 0, alpha, rho, lambda, 0, eta, zeta, intercept_spec) +x_ss_zero <- ifelse(x_ss_zero_un<0, 0, x_ss_zero_un) +delta_x <- x_ss_spec - x_ss_zero +# 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Calculate individual intercepts and steady states under different strategies - Restricted alpha +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Gamma-spec +alpha_res <- 1 +term1_res <- (1-alpha_res)*delta*rho_res +term2_res <- term1_res*(1+lambda_res) +term3_res <- (eta_res*lambda_res + zeta_res*(1 - lambda_res))*(rho_res*tau_L_2/omega) +num_res <- eta_res*tau_L_2/omega - term1_res*term3_res - term2_res*naivete_res +denom_res <- 1 - term2_res +num_omega_res <- eta_res*tau_L_2/omega_est - term1_res*term3_res - term2_res*naivete_res +gamma_spec_res <- num_res/denom_res +gamma_spec_omega_res <- num_omega_res/denom_res +gamma_tilde_spec_res <- gamma_spec_res - naivete_res +gamma_tilde_spec_omega_res <- gamma_spec_omega_res - naivete_res +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Calculate individual intercepts and steady states under different strategies +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +intercept_spec_res <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_spec_res, gamma_spec_res, alpha = 1, rho_res, lambda_res, mispredict, eta = eta_res, zeta = zeta_res) +intercept_het_L_effect_res <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_effect_res, gamma_L_effect_res, alpha = 1, rho_res, lambda_res, mispredict, eta = eta_res, zeta = zeta_res) +intercept_het_B_res <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_B, gamma_B_res, alpha = 1, rho_res, lambda_res, mispredict, eta = eta_res, zeta = zeta_res) +intercept_het_L_res <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L, gamma_L_res, alpha = 1, rho_res, lambda_res, mispredict, eta = eta_res, zeta = zeta_res) +intercept_spec_omega_res <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_spec_omega_res, gamma_spec_omega_res, alpha = 1, rho_res, lambda_res, mispredict, eta = eta_res, zeta = zeta_res) +intercept_het_L_effect_omega_res <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_effect_omega_res, gamma_L_effect_omega_res, alpha = 1, rho_res, lambda_res, mispredict, eta = eta_res, zeta = zeta_res) +intercept_het_L_omega_res <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_omega, gamma_L_omega_res, alpha = 1, rho_res, lambda_res, mispredict, eta = eta_res, zeta = zeta_res) +intercept_het_L_effect_multiple_res <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_effect_multiple, gamma_L_effect_multiple, alpha = 1, rho_res, lambda_res, mispredict, eta = eta_res, zeta = zeta_res) +intercept_het_B_multiple_res <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_B_multiple, gamma_B_multiple, alpha = 1, rho_res, lambda_res, mispredict, eta = eta_res, zeta = zeta_res) +intercept_het_L_multiple_res <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_multiple, gamma_L_multiple, alpha = 1, rho_res, lambda_res, mispredict, eta = eta_res, zeta = zeta_res) +intercept_het_L_effect_eta_high_res <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_effect, gamma_L_effect, alpha = 1, rho_res, lambda_res, mispredict, eta = eta_res, zeta = zeta_res, eta_scale=1.1) +intercept_het_L_effect_eta_low_res <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_effect, gamma_L_effect, alpha = 1, rho_res, lambda_res, mispredict, eta = eta_res, zeta = zeta_res, eta_scale=0.9) +# 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Calculate individual counterfactuals +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +x_ss_spec_res <- calculate_steady_state(param, gamma_tilde_spec_res, gamma_spec_res, alpha = 1, rho_res, lambda_res, mispredict, eta = eta_res, zeta = zeta_res, intercept_spec_res) +x_ss_zero_un_res <- calculate_steady_state(param, 0, 0, alpha = 1, rho_res, lambda_res, 0, eta = eta_res, zeta = zeta_res, intercept_spec_res) +x_ss_zero_res <- ifelse(x_ss_zero_un_res<0, 0, x_ss_zero_un_res) +delta_x_res <- x_ss_spec_res - x_ss_zero_res +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Compute population averages +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +x_ss_spec_w <- weighted.mean(x_ss_spec, w, na.rm=T) +gamma_tilde_spec_w <- weighted.mean(gamma_tilde_spec, w, na.rm=T) +gamma_spec_w <- weighted.mean(gamma_spec, w, na.rm=T) +gamma_spec_omega_w <- weighted.mean(gamma_spec_omega, w, na.rm=T) +delta_x_spec <- weighted.mean(delta_x, w, na.rm=T) +x_ss_i_data <- weighted.mean(x_ss_i_data, w, na.rm=T) +remove(list=ls()) +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Setup +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Import plotting functions and constants from lib file +source('input/lib/r/ModelFunctions.R') +# Import data +df <- import_data() +param <- param_initial +winsorize=F, full=F, display_warning=FALS +winsorize=F +full=F +display_warning=FALSE +param %<>% +list.merge( +#get_opt(df), +get_taus(df, winsorize=winsorize, full=full), +get_mispredict(df), +get_ideal(df), +get_predict(df), +get_wtp(df), +get_avg_use(df), +get_fb(df), +get_limit_last_week(df) +) +# Solve system of equation #1 +param %<>% +solve_sys_eq_1 %>% +as.list %>% +list.merge(param) +# Solve system of equations #2 +param %<>% +solve_sys_eq_2(display_warning=display_warning) %>% +as.list %>% +list.merge(param) +# Solve system of equations #3 +param %<>% +solve_sys_eq_3 %>% +as.list %>% +list.merge(param) +# Solve for individual effects +tau_L_2_spec <- find_tau_L2_spec(df) +tau_tilde_spec <- find_tau_L3_spec(df) +x_ss_i_data <- calculate_x_ss_i_spec(df) +param %<>% +solve_effects_individual(x_ss_i_data= x_ss_i_data, tau_tilde_L=tau_tilde_spec, tau_L_2=tau_L_2_spec, w=df$w)%>% +as.list %>% +list.merge(param) +tau_tilde_L=tau_tilde_spec +tau_L_2=tau_L_2_spec +w=df$w +rho <- param[['rho']] +lambda <- param[['lambda']] +rho_res <- param[['rho_res']] +lambda_res <- param[['lambda_res']] +delta <- param[['delta']] +alpha <- param[['alpha']] +omega <- param[['omega']] +omega_est <- param[['omega_est']] +mispredict <- param[['mispredict']] +d_L <- param[['d_L']] +d_CL <- param[['d_CL']] +eta <- param[['eta']] +zeta <- param[['zeta']] +naivete <- param[['naivete']] +gamma_L_effect <- param[['gamma_L_effect']] +gamma_tilde_L_effect <- param[['gamma_tilde_L_effect']] +gamma_tilde_L_effect_omega <- param[['gamma_tilde_L_effect_omega']] +gamma_L_effect_omega <- param[['gamma_L_effect_omega']] +gamma_L_effect_multiple <- param[['gamma_L_effect_multiple']] +gamma_tilde_L_effect_multiple <- param[['gamma_tilde_L_effect_multiple']] +gamma_L <- param[['gamma_L']] +gamma_tilde_L <- param[['gamma_tilde_L']] +gamma_tilde_L_omega <- param[['gamma_tilde_L_omega']] +gamma_L_omega <- param[['gamma_L_omega']] +gamma_tilde_L_multiple <- param[['gamma_tilde_L_multiple']] +gamma_L_multiple <- 
param[['gamma_L_multiple']] +gamma_B <- param[['gamma_B']] +gamma_tilde_B <- param[['gamma_tilde_B']] +gamma_tilde_B_multiple <- param[['gamma_tilde_B_multiple']] +gamma_B_multiple <- param[['gamma_B_multiple']] +eta_res <- param[['eta_res']] +zeta_res <- param[['zeta_res']] +naivete_res <- param[['naivete_res']] +gamma_L_effect_res <- param[['gamma_L_effect_res']] +gamma_tilde_L_effect_res <- param[['gamma_tilde_L_effect_res']] +gamma_tilde_L_effect_omega_res <- param[['gamma_tilde_L_effect_omega_res']] +gamma_L_effect_omega_res <- param[['gamma_L_effect_omega_res']] +gamma_tilde_L_effect_multiple_res <- param[['gamma_tilde_L_effect_multiple_res']] +gamma_L_res <- param[['gamma_L_res']] +gamma_L_omega_res <- param[['gamma_L_omega_res']] +gamma_L_multiple_res <- param[['gamma_L_multiple_res']] +gamma_B_res <- param[['gamma_B_res']] +gamma_B_multiple_res <- param[['gamma_B_multiple_res']] +tau_L_2_signed <- param[['tau_L_2']]*-1 +# Gamma-spec +num <- eta*tau_L_2/omega - (1-alpha)*delta*rho*(((eta-zeta)*tau_tilde_L/omega+zeta*rho*tau_L_2/omega) + (1+lambda)*mispredict*(-eta+(1-alpha)*delta*rho^2*((eta-zeta)*lambda+zeta))) +denom <- 1 - (1-alpha)*delta*rho*(1+lambda) +num_omega <- eta*tau_L_2/omega_est - (1-alpha)*delta*rho*(((eta-zeta)*tau_tilde_L/omega_est+zeta*rho*tau_L_2/omega) + (1+lambda)*mispredict*(-eta+(1-alpha)*delta*rho^2*((eta-zeta)*lambda+zeta))) +gamma_spec <- num/denom +gamma_spec_omega <- num_omega/denom +gamma_tilde_spec <- gamma_spec - naivete +gamma_tilde_spec_omega <- gamma_spec_omega - naivete +intercept_spec <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_spec, gamma_spec, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L_effect <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_effect, gamma_L_effect, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_B <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_B, gamma_B, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L, gamma_L, alpha, rho, lambda, mispredict, eta, zeta) +intercept_spec_omega <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_spec_omega, gamma_spec_omega, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L_effect_omega <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_effect_omega, gamma_L_effect_omega, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L_omega <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_omega, gamma_L_omega, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L_effect_multiple <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_effect_multiple, gamma_L_effect_multiple, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_B_multiple <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_B_multiple, gamma_B_multiple, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L_multiple <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_multiple, gamma_L_multiple, alpha, rho, lambda, mispredict, eta, zeta) +intercept_het_L_effect_eta_high <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_effect, gamma_L_effect, alpha, rho, lambda, mispredict, eta, zeta, eta_scale=1.1) +intercept_het_L_effect_eta_low <- calculate_intercept_spec(x_ss_i_data, param, gamma_tilde_L_effect, gamma_L_effect, alpha, rho, lambda, mispredict, eta, zeta, eta_scale=0.9) +x_ss_spec <- calculate_steady_state(param, gamma_tilde_spec, gamma_spec, alpha, rho, lambda, mispredict, eta, zeta, intercept_spec) 
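+# Reading aid for the block above and below (no new quantities are introduced;
+# all symbols are the objects already defined in this script):
+#   gamma_spec = num / denom, with
+#     num   = eta*tau_L_2/omega
+#             - (1-alpha)*delta*rho*( (eta-zeta)*tau_tilde_L/omega + zeta*rho*tau_L_2/omega
+#                                     + (1+lambda)*mispredict*(-eta + (1-alpha)*delta*rho^2*((eta-zeta)*lambda + zeta)) )
+#     denom = 1 - (1-alpha)*delta*rho*(1+lambda)
+#   and gamma_tilde_spec = gamma_spec - naivete.
+# The steady-state block that follows evaluates x_ss_spec at these values and a
+# counterfactual x_ss_zero_un with gamma_tilde = gamma = 0 and mispredict = 0
+# (truncated below at zero), so delta_x is the implied change in steady-state use.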
+x_ss_spec <- calculate_steady_state(param, gamma_tilde_spec, gamma_spec, alpha, rho, lambda, mispredict, eta, zeta, intercept_spec) +calculate_steady_state <- function(param, gamma_tilde, gamma, alpha, rho, lambda, mispredict, eta, zeta, intercept=NA, eta_scale=1){ +# Define +eta <- eta * eta_scale +delta <- param[['delta']] +p_B <- param[['p_B']] +# Calculate +p <- 0 +term_pre <- (1 - (1-alpha)*delta*rho) +term1 <- intercept - p*term_pre +term2 <- (1-alpha)*delta*rho +term3 <- (eta - zeta) * mispredict + gamma_tilde*(1+lambda) +num <- term1 - term2*term3 + gamma +terma <- term_pre*(-eta - zeta * (rho / (1 - rho))) +termb <- (1-alpha)*delta*rho*zeta +denom <- terma + termb +print(paste0("denom: ", denom)) +x_ss_calc <- num /denom +return(x_ss_calc) +} +x_ss_spec <- calculate_steady_state(param, gamma_tilde_spec, gamma_spec, alpha, rho, lambda, mispredict, eta, zeta, intercept_spec) +x_ss_zero_un <- calculate_steady_state(param, 0, 0, alpha, rho, lambda, 0, eta, zeta, intercept_spec) +x_ss_zero <- ifelse(x_ss_zero_un<0, 0, x_ss_zero_un) +delta_x <- x_ss_spec - x_ss_zero +x_ss_spec_w <- weighted.mean(x_ss_spec, w, na.rm=T) diff --git a/17/replication_package/code/analysis/structural/README.md b/17/replication_package/code/analysis/structural/README.md new file mode 100644 index 0000000000000000000000000000000000000000..939e53eb93c6b3a31c6a39317e949e4b6b4a1531 --- /dev/null +++ b/17/replication_package/code/analysis/structural/README.md @@ -0,0 +1,6 @@ +# README + +This module estimates parameters and generates plots for our structural model. + +`/code/` contains the below file: +* StructuralModel.R diff --git a/17/replication_package/code/analysis/structural/code/StructuralModel.R b/17/replication_package/code/analysis/structural/code/StructuralModel.R new file mode 100644 index 0000000000000000000000000000000000000000..8f858959ce975ea3295f99990e9967bd46628fab --- /dev/null +++ b/17/replication_package/code/analysis/structural/code/StructuralModel.R @@ -0,0 +1,295 @@ +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Setup +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +# Import plotting functions and constants from lib file +source('input/lib/r/ModelFunctions.R') + +# Import data +df <- import_data() +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Nice scalars +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +nice_scalars <- function(param){ + + limiteffectlastweeknice <- signif(param$limit_effect_last_week, digits=2) * -1 + limiteffect <- signif(param$tau_L, digits=2) * -1 + mispredictnice <- signif(param$mispredict, digits=2) + + tautildenice <- signif(param$tau_tilde_B, digits=2)* -1 + + + # pctirrationaltwo <- param$rho_tilde/param$rho + #pctirrationaltwo <- signif(pctirrationaltwo, digits=2) + + mispredictpct <- param$mispredict/param$x_ss + mispredictpct <- signif(mispredictpct, digits=2)*100 + + pctreductiontemptation <- param$delta_x_temptation/ param$x_ss + pctreductiontemptationres <- param$delta_x_temptation_res/ param$x_ss + + pctreductiontemptation <- signif(pctreductiontemptation, digits=2)*100 + pctreductiontemptationres <- signif(pctreductiontemptationres, digits=2)*100 + + + dLpercent <- param$d_L/100 + dLpercent <- signif(dLpercent, digits=3) + + dCLpercent <- param$d_CL/100 + dCLpercent <- signif(dCLpercent, digits=2) + + taubtwonice <- signif(param$tau_B_2, digits=2) * -1 + taubtwofullnice <- signif(param$tau_B_2_full , digits=2) 
* -1 + taubthreenice <- signif(param$tau_B_3, digits=2) * -1 + taubfournice <- signif(param$tau_B_4, digits=2) * -1 + taubfivenice <- signif(param$tau_B_5, digits=2) * -1 + + gammaLeffectnice <- signif(param$gamma_L_effect, digits=1) + gammaLnice <- signif(param$gamma_L, digits=2) + gammaBnice <- signif(param$gamma_B, digits=2) + + naivetenice <- signif(param$naivete, digits=2) + gammaLeffectresnice <- signif(param$gamma_L_effect_res, digits=1) + gammaLresnice <- signif(param$gamma_L_res, digits=2) + gammaBresnice <- signif(param$gamma_B_res, digits=2) + + naiveteresnice <- signif(param$naivete_res, digits=2) + + attritionratenice <- signif(param$attritionrate, digits=2)*100 + + + dLnice <- signif(param$d_L, digits=2)*-1 + dCLnice <- signif(param$d_CL, digits=2)*-1 + + underestimatetemp <- format(round(param$underestimatetemp,3), digits=2) + + tautildeBtwothreenice <- signif(param$tau_tilde_B_3_2, digits=2)*-1 + + MPLStwonice <- signif(param$MPL_S2, digits=2)*-1 + + tauLtwosigned <- signif(param$tau_L_2)*-1 + + + + + #Have hourly variables + gammaBnicehour <- gammaBnice*60 + gammaLnicehour <- gammaLnice*60 + gammaLeffectnicehour <- gammaLeffectnice*60 + naivetenicehour <- naivetenice*60 + gammaBresnicehour <- gammaBresnice*60 + gammaLresnicehour <- gammaLresnice*60 + gammaLeffectresnicehour <- gammaLeffectresnice*60 + naiveteresnicehour <- naiveteresnice*60 + taubtwohour <- taubtwonice*60 + + + + # Return + solution <- list( + mispredictnice = mispredictnice, + tautildenice = tautildenice, + taubtwonice = taubtwonice, + gammaLeffectnice = gammaLeffectnice, + gammaLnice = gammaLnice, + gammaBnice = gammaBnice, + naivetenice = naivetenice, + gammaLeffectnicehour = gammaLeffectnicehour, + gammaLnicehour = gammaLnicehour, + gammaBnicehour = gammaBnicehour, + naivetenicehour = naivetenicehour, + taubtwohour = taubtwohour, + gammaLeffectresnice = gammaLeffectresnice, + gammaLresnice = gammaLresnice, + gammaBresnice = gammaBresnice, + naiveteresnice = naiveteresnice, + gammaLeffectresnicehour = gammaLeffectresnicehour, + gammaLresnicehour = gammaLresnicehour, + gammaBresnicehour = gammaBresnicehour, + naiveteresnicehour = naiveteresnicehour, + dLnice = dLnice, + dCLnice = dCLnice, + dLpercent = dLpercent, + dCLpercent = dCLpercent, + underestimatetemp = underestimatetemp, + tautildeBtwothreenice = tautildeBtwothreenice, + limiteffect = limiteffect, + attritionratenice = attritionratenice, + taubthreenice = taubthreenice, + taubfournice = taubfournice, + taubfivenice = taubfivenice, + pctreductiontemptation = pctreductiontemptation, + pctreductiontemptationres = pctreductiontemptationres, + MPLStwonice = MPLStwonice, + mispredictpct = mispredictpct, + taubtwofullnice = taubtwofullnice, + tauLtwosigned = tauLtwosigned, + limiteffectlastweeknice = limiteffectlastweeknice) + + return(solution) +} + + + +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Full model, taub2=full +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +# Define constants +param <- param_initial + +# Estimate model +param_full <- estimate_model(df, param, full=T, display_warning=F) + +# Add some auto-import figures +param_additional_full_taub2 <- + param_full %>% + as.list %>% + list.merge(param_full) + +save_tex(param_additional_full_taub2, filename="structural_fulltaub2", suffix="fulltaubtwo") + + +df$w <- 1 + +results <- vector(mode = "list", length = size) + +results <- run_boot_procedure(run_boot_iter_full) + +# Get bootstrap distribution +bottom <- 
lapply(results, find_bottom) +top <- lapply(results, find_top) + +save_boot_tex_percentile(bottom, top, + suffix="bootfulltaubtwo", + filename="structural_boot_fulltaubtwo") + + + + +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Full model, taub2 half period +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +# Define constants +param <- param_initial + +# Estimate model +param_full <- estimate_model(df, param) + +print(param_full$eta) +print(param_full$zeta) +check_steady_state(param_full) + + + + +# Add some auto-import figures +param_additional <- + param_full %>% + as.list %>% + list.merge(param_full) + +save_tex(param_additional, filename="structural") + +# Add some auto-import figures +param_additional_two <- + param_full %>% + as.list %>% + list.merge(param_full) + + +save_tex2(param_additional_two, filename="structural_two", suffix="twodigits") + + +param_additional_nice <- + param_full %>% + nice_scalars %>% + as.list %>% + list.merge(param_full) + +save_tex_nice(param_additional_nice, filename="structural_nice", suffix="nice") + + + + + +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Balanced model +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + + +# Add weights +df %<>% balance_data(magnitude=3) + +# Define constants +param <- param_initial +# Estimate model +param_balanced <- estimate_model(df, param, winsorize=T) + + +# # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# # Bootstrap model no perceived habit formation +# # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Revert to unbalanced +df$w <- 1 + +results <- vector(mode = "list", length = size) + + +results <- run_boot_procedure(run_boot_iter) + +# Get bootstrap distribution +bottom <- lapply(results, find_bottom) +top <- lapply(results, find_top) + +plot_time_effects(param_full, bottom, top, filename="structural_time_effects_plot") + +save_boot_tex_percentile(bottom, top, + suffix="boot", + filename="structural_boot") + +plot_decomposition_boot(param_full, bottom, top, + filename="structural_decomposition_plot_boot") + +plot_decomposition_boot_unique(param_full, bottom, top, + filename="structural_decomposition_plot_boot_restricted") + +plot_decomposition_boot_etas(param_full, bottom, top, + filename="structural_decomposition_plot_boot_restricted_etas") + +plot_time_effects_both_est(param_full, bottom, top, filename="time_effects_both_est") + + + + + + +# # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# # Bootstrap balanced model no perceived +# # ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +results_bal <- vector(mode = "list", length = size) + + +#CHANGE MAGNITUDE OF WEIGHTS ONCE ITS SORTED! 
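+# Illustrative sketch only (not called by this script): find_bottom and find_top
+# are defined in input/lib/r/ModelFunctions.R and are applied via lapply() to the
+# bootstrap results. Assuming each element of `results` collects the bootstrap
+# draws for one statistic, they behave roughly like pointwise percentile bounds;
+# the 2.5%/97.5% levels below are an assumption for illustration, not the
+# package's definition.
+find_bottom_sketch <- function(draws) quantile(draws, probs = 0.025, na.rm = TRUE)
+find_top_sketch <- function(draws) quantile(draws, probs = 0.975, na.rm = TRUE)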
+results_bal <- run_boot_procedure(run_boot_iter_bal) + + +# Get bootstrap distribution +median_bal <- lapply(results_bal, median, na.rm = T) +sdevs_bal <- lapply(results_bal, sd, na.rm = T) +bottom_bal <- lapply(results_bal, find_bottom) +top_bal <- lapply(results_bal, find_top) + +save_tex(param_balanced, filename="balanced_median", suffix="balancedmedian") + +#For restricted model + +plot_time_effects_bal(param_full, param_balanced, bottom, top, bottom_bal, top_bal, filename="time_effects_balanced") +plot_time_effects_both(param_full, param_balanced, bottom, top, bottom_bal, top_bal, filename="time_effects_both") + + +save_boot_tex_percentile(bottom_bal, top_bal, + suffix="balanced", + filename="balanced_boot") diff --git a/17/replication_package/code/analysis/structural/input.txt b/17/replication_package/code/analysis/structural/input.txt new file mode 100644 index 0000000000000000000000000000000000000000..533a98f00574bbbd5f526ac13b140a75e75eeafd --- /dev/null +++ b/17/replication_package/code/analysis/structural/input.txt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:1aacb5e3a47846afaf251dbe069f5cee136e1255fe4ec43721a033fafc1d837d +size 812 diff --git a/17/replication_package/code/analysis/structural/make.py b/17/replication_package/code/analysis/structural/make.py new file mode 100644 index 0000000000000000000000000000000000000000..a88fe6c1941d7eca65f3360be8513d8dfc926034 --- /dev/null +++ b/17/replication_package/code/analysis/structural/make.py @@ -0,0 +1,67 @@ +################### +### ENVIRONMENT ### +################### +import git +import imp +import os + +### SET DEFAULT PATHS +ROOT = '../..' + +PATHS = { + 'root' : ROOT, + 'lib' : os.path.join(ROOT, 'lib'), + 'config' : os.path.join(ROOT, 'config.yaml'), + 'config_user' : os.path.join(ROOT, 'config_user.yaml'), + 'input_dir' : 'input', + 'external_dir' : 'external', + 'output_dir' : 'output', + 'output_local_dir' : 'output_local', + 'makelog' : 'log/make.log', + 'output_statslog' : 'log/output_stats.log', + 'source_maplog' : 'log/source_map.log', + 'source_statslog' : 'log/source_stats.log', +} + +### LOAD GSLAB MAKE +f, path, desc = imp.find_module('gslab_make', [PATHS['lib']]) +gs = imp.load_module('gslab_make', f, path, desc) + +### LOAD CONFIG USER +PATHS = gs.update_paths(PATHS) +gs.update_executables(PATHS) + +############ +### MAKE ### +############ + +### START MAKE +gs.remove_dir(['input', 'external']) +gs.clear_dir(['output', 'log', 'temp']) +gs.start_makelog(PATHS) + +### GET INPUT FILES +inputs = gs.link_inputs(PATHS, ['input.txt']) +# gs.write_source_logs(PATHS, inputs + externals) +# gs.get_modified_sources(PATHS, inputs + externals) + +### RUN SCRIPTS +""" +Critical +-------- +Many of the Stata analysis scripts recode variables using +the `recode` command. Double-check all `recode` commands +to confirm recoding is correct, especially when reusing +code for a different experiment version. 
+""" + +gs.run_r(PATHS, program = 'code/StructuralModel.R') + +### LOG OUTPUTS +gs.log_files_in_output(PATHS) + +### CHECK FILE SIZES +#gs.check_module_size(PATHS) + +### END MAKE +gs.end_makelog(PATHS) diff --git a/17/replication_package/code/analysis/treatment_effects/README.md b/17/replication_package/code/analysis/treatment_effects/README.md new file mode 100644 index 0000000000000000000000000000000000000000..49c0b91d703153c0e76fd08f84180b96f6cf7b13 --- /dev/null +++ b/17/replication_package/code/analysis/treatment_effects/README.md @@ -0,0 +1,22 @@ +# README + +This module produces model-free estimates of treatment effects. + +`/code/` contains the below files : +* Beliefs.do (compares actual treatment effect with predicted treatment effect) + +* CommitmentResponse.do (plots how treatment effect differs by SMS addiction scale and other survey indicators) + +* FDRTable.do (estimates how treatment effect differs by SMS addiction scale and other indicators, adjusted for false-discovery rate. Also plots some descriptive statistics) + +* HabitFormation.do (compares actual and predicted usage) + +* Heterogeneity.do (plots heterogeneous treatment effects) + +* HeterogeneityInstrumental.do (plots heterogeneous treatment effects) + +* ModelHeterogeneity.R (generates other heterogeneity plots, some temptation plots) + +* SurveyValidation.do (plots effect of rewarding accurate usage prediction on usage prediction accuracy) + +The script `ModelHeterogeneity.R` requires the dataset `AnalysisUser.dta` when calling the function `get_opt()`. This function computes the number of users who opted out of the limit functionality. Since this dataset contains PII, it has been omitted from this replication package. As such, the call to `get_opt()` (l.1396) has been commented out so it does not prevent the user from smoothly running this module. 
diff --git a/17/replication_package/code/analysis/treatment_effects/code/Beliefs.do b/17/replication_package/code/analysis/treatment_effects/code/Beliefs.do new file mode 100644 index 0000000000000000000000000000000000000000..f7f04cc62a2bde23be508fcc64b11874a320a931 --- /dev/null +++ b/17/replication_package/code/analysis/treatment_effects/code/Beliefs.do @@ -0,0 +1,359 @@ +// Naivete about past and future usage + +*************** +* Environment * +*************** + +clear all +adopath + "input/lib/ado" +adopath + "input/lib/stata/ado" + +********************* +* Utility functions * +********************* + +program define_constants + yaml read YAML using "input/config.yaml" +end + +program define_plot_settings + global CISPIKE_VERTICAL_GRAPHOPTS /// + ylabel(#6) /// + xsize(6.5) ysize(4.5) /// + + global CISPIKE_HORIZONTAL_GRAPHOPTS /// + xlabel(#6) /// + xsize(6.5) ysize(8.5) + + global CISPIKE_STACKED_GRAPHOPTS /// + xcommon row(2) /// + graphregion(color(white)) /// + xsize(6.5) ysize(8.5) + + global CISPIKE_SETTINGS /// + spikecolor(maroon black navy gray) /// + cicolor(maroon black navy gray) /// + spike(msymbol(O)||msymbol(S)||msymbol(D)||msymbol(T)) + + global COEFPLOT_VERTICAL_SETTINGS /// + mcolor(maroon) ciopts(recast(rcap) lcolor(maroon)) /// + yline(0, lwidth(thin) lcolor(black)) /// + bgcolor(white) graphregion(color(white)) /// + legend(region(lcolor(white))) /// + xsize(6.5) ysize(4.5) /// + ytitle("Treatment effect (minutes/day)" " ") + + global COLOR_MAROON /// + mcolor(maroon) ciopts(recast(rcap) lcolor(maroon)) + + global COLOR_GRAY /// + mcolor(gray) ciopts(recast(rcap) lcolor(gray)) + + global COLOR_BLACK /// + mcolor(black) ciopts(recast(rcap) lcolor(black)) + + global COLOR_NAVY /// + mcolor(navy) ciopts(recast(rcap) lcolor(navy)) +end + +********************** +* Analysis functions * +********************** + +program main + define_constants + define_plot_settings + import_data + + plot_naivete_all + plot_naivete_all, sixty + plot_naivete_all, hundred + + reg_bonus + reg_bonus_S2 + reg_bonus_new +end + +program import_data + use "input/final_data_sample.dta", clear +end + +program plot_naivete_all + syntax, [sixty hundred] + + local suffix "" + local winsorization "W0" + if ("`sixty'" == "sixty"){ + local suffix "_W" + local winsorization "W60" + } + if ("`hundred'" == "hundred"){ + local suffix "_W100" + local winsorization "W100" + } + + * Preserve data + preserve + + * Reshape data + rename PD_*_UsageFITSBY UsageActual_* + + rename S2_PredictUseNext_1`suffix' UsagePredicted0_P2 + + rename S3_PredictUseNext_1`suffix' UsagePredicted0_P3 + rename S2_PredictUseNext_2`suffix' UsagePredicted1_P3 + + rename S4_PredictUseNext_1`suffix' UsagePredicted0_P4 + rename S3_PredictUseNext_2`suffix' UsagePredicted1_P4 + rename S2_PredictUseNext_3`suffix' UsagePredicted2_P4 + + rename S4_PredictUseNext_2`suffix' UsagePredicted1_P5 + rename S3_PredictUseNext_3`suffix' UsagePredicted2_P5 + + keep UserID S3_Bonus S2_LimitType UsagePredicted* UsageActual* + keep UserID S3_Bonus S2_LimitType *_P2 *_P3 *_P4 *_P5 + reshape long Usage, i(UserID S3_Bonus S2_LimitType) j(j) string + + split j, p(_) + rename (j1 j2) (measure time) + + * Recode data + encode time, generate(time_encode) + encode measure, generate(measure_encode) + + recode time_encode /// + (1 = 1 "Period 2") /// + (2 = 2 "Period 3") /// + (3 = 3 "Period 4") /// + (4 = 4 "Period 5"), /// + gen(time_recode) + + recode measure_encode /// + (1 = 1 "Actual") /// + (2 = 2 "Survey t prediction") /// + (3 = 3 "Survey t-1 
prediction") /// + (4 = 4 "Survey t-2 prediction"), /// + gen(measure_recode) + + * Plot data + cispike Usage if S3_Bonus == 0 & S2_LimitType == 0, /// + over1(measure_recode) over2(time_recode) /// + $CISPIKE_SETTINGS /// + graphopts($CISPIKE_VERTICAL_GRAPHOPTS /// + ytitle("Usage (minutes/day)" " ")) + + graph export "output/cispike_naivete_BcontrolxLcontrol_`winsorization'.pdf", replace + + cispike Usage if S3_Bonus == 1 & S2_LimitType == 0, /// + over1(measure_recode) over2(time_recode) /// + $CISPIKE_SETTINGS /// + graphopts($CISPIKE_VERTICAL_GRAPHOPTS /// + ytitle("Usage (minutes/day)" " ")) + + graph export "output/cispike_naivete_BtreatxLcontrol_`winsorization'.pdf", replace + + cispike Usage if S3_Bonus == 0 & S2_LimitType > 0, /// + over1(measure_recode) over2(time_recode) /// + $CISPIKE_SETTINGS /// + graphopts($CISPIKE_VERTICAL_GRAPHOPTS /// + ytitle("Usage (minutes/day)" " ")) + + graph export "output/cispike_naivete_BcontrolxLtreat_`winsorization'.pdf", replace + + cispike Usage if S3_Bonus == 1 & S2_LimitType > 0, /// + over1(measure_recode) over2(time_recode) /// + $CISPIKE_SETTINGS /// + graphopts($CISPIKE_VERTICAL_GRAPHOPTS /// + ytitle("Usage (minutes/day)" " ")) + + graph export "output/cispike_naivete_BtreatxLtreat_`winsorization'.pdf", replace + + * Restore data + restore +end + +program reg_bonus + est clear + + preserve + + gen S1_Usage_FITSBY = PD_P1_UsageFITSBY + gen S3_Usage_FITSBY = PD_P3_UsageFITSBY + gen S4_Usage_FITSBY = PD_P4_UsageFITSBY + gen S5_Usage_FITSBY = PD_P5_UsageFITSBY + + gen S2_Predict_FITSBY = S2_PredictUseNext_1_W + gen S3_Predict_FITSBY = S3_PredictUseNext_1_W + gen S4_Predict_FITSBY = S3_PredictUseNext_2_W + gen S5_Predict_FITSBY = S3_PredictUseNext_3_W + + * Run regressions + foreach survey in S3 S4 S5 { + local yvar `survey'_Usage_FITSBY + local baseline S1_Usage_FITSBY + + gen_treatment, suffix(_`survey') simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') suffix(_`survey') simple + est store `yvar' + } + + * Run regressions + foreach survey in S3 S4 S5 { + local yvar `survey'_Predict_FITSBY + local baseline S1_Usage_FITSBY + + gen_treatment, suffix(_`survey') simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') suffix(_`survey') simple + est store `yvar' + } + + gen S2_reduction = S2_PredictUseInitial_W * - (S2_PredictUseBonus / 100) + + cap drop B_S3 + gen B_S3 = 1 + reg S2_reduction B_S3, noconstant + est store S2_reduction + + * Plot regressions (by period) + coefplot (*Usage*, label("Actual") $COLOR_MAROON msymbol(O)) /// + (S2_reduction, label("Survey 2 MPL prediction") $COLOR_NAVY msymbol(S)) /// + (*Predict*, label("Survey 3 prediction") $COLOR_GRAY msymbol(D)), /// + keep(B_*) /// + vertical /// + $COEFPLOT_VERTICAL_SETTINGS /// + xlabel(1 "Period 3" 2 "Period 4" 3 "Period 5", /// + valuelabel angle(0)) + + graph export "output/coef_belief_bonus_effect.pdf", replace + + restore +end + +program reg_bonus_new + est clear + + preserve + + gen S1_Usage_FITSBY = PD_P1_UsageFITSBY + gen S2_Usage_FITSBY = PD_P2_UsageFITSBY + gen S3_Usage_FITSBY = PD_P3_UsageFITSBY + gen S4_Usage_FITSBY = PD_P4_UsageFITSBY + gen S5_Usage_FITSBY = PD_P5_UsageFITSBY + + gen S2_Predict_FITSBY = S2_PredictUseNext_1_W + gen S3_Predict_FITSBY = S3_PredictUseNext_1_W + gen S4_Predict_FITSBY = S3_PredictUseNext_2_W + gen S5_Predict_FITSBY = S3_PredictUseNext_3_W + + * Run regressions + foreach survey in S2 S3 S4 S5 { + local yvar `survey'_Usage_FITSBY + local baseline S1_Usage_FITSBY + + gen_treatment, suffix(_`survey') simple + 
reg_treatment, yvar(`yvar') indep($STRATA `baseline') suffix(_`survey') simple + est store `yvar' + } + + * Run regressions + foreach survey in S3 S4 S5 { + local yvar `survey'_Predict_FITSBY + local baseline S1_Usage_FITSBY + + gen_treatment, suffix(_`survey') simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') suffix(_`survey') simple + est store `yvar' + } + + gen S2_reduction = S2_PredictUseInitial_W * - (S2_PredictUseBonus / 100) + + cap drop B_S3 + gen B_S3 = 1 + reg S2_reduction B_S3, noconstant + est store S2_reduction + + matrix C = J(3,1,.) + matrix rownames C = mean ll ul + matrix colnames C = B_S2 + + * TODO: make this reproducible + matrix C[1,1] = -16.11756 \ -19.64522 \ -12.62825 + matrix list C + coefplot matrix(C), ci((2 3)) + + * Plot regressions (by period) + coefplot (matrix(C), ci((2 3)) label("Makes {&alpha} = 0") $COLOR_BLACK) /// + (S2_reduction, label("Survey 2 MPL prediction") $COLOR_NAVY) /// + (*Usage*, label("Actual") $COLOR_MAROON) /// + (*Predict*, label("Survey 3 prediction") $COLOR_GRAY), /// + keep(B_*) /// + vertical /// + $COEFPLOT_VERTICAL_SETTINGS /// + xlabel(1 "Period 2" 2 "Period 3" 3 "Period 4" 4 "Period 5", /// + valuelabel angle(0)) + + graph export "output/coef_belief_bonus_effect_new.pdf", replace + + restore +end + +program reg_bonus_S2 + + est clear + + preserve + + gen S1_Usage_FITSBY = PD_P1_UsageFITSBY + gen S2_Usage_FITSBY = PD_P2_UsageFITSBY + gen S3_Usage_FITSBY = PD_P3_UsageFITSBY + + gen S2_Predict_FITSBY = S2_PredictUseNext_1_W + gen S3_Predict_FITSBY = S2_PredictUseNext_2_W + + * Run regressions + foreach survey in S2 S3 { + local yvar `survey'_Usage_FITSBY + local baseline S1_Usage_FITSBY + + gen_treatment, suffix(_`survey') simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') suffix(_`survey') simple + est store `yvar' + } + + * Run regressions + foreach survey in S2 S3 { + local yvar `survey'_Predict_FITSBY + local baseline S1_Usage_FITSBY + + gen_treatment, suffix(_`survey') simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') suffix(_`survey') simple + est store `yvar' + } + + gen S2_reduction = S2_PredictUseInitial_W * - (S2_PredictUseBonus / 100) + + cap drop B_S2 + gen B_S2 = 1 + reg S2_reduction B_S2, noconstant + est store S2_reduction + + + * Plot regressions (by period) + coefplot (*Usage*, label("Actual") $COLOR_MAROON) /// + (*Predict*, label("Predicted") $COLOR_GRAY) /// + (S2_reduction, label("Bonus Predicted") $COLOR_NAVY), /// + keep(B_*) /// + vertical /// + $COEFPLOT_VERTICAL_SETTINGS /// + xlabel(1 "Period 2" 2 "Period 3", /// + valuelabel angle(0)) + + graph export "output/coef_belief_bonus_survey2.pdf", replace + + restore +end +*********** +* Execute * +*********** + +main diff --git a/17/replication_package/code/analysis/treatment_effects/code/CommitmentResponse.do b/17/replication_package/code/analysis/treatment_effects/code/CommitmentResponse.do new file mode 100644 index 0000000000000000000000000000000000000000..ab36dd0dfa7c980a683137d393738f4d7cba7e48 --- /dev/null +++ b/17/replication_package/code/analysis/treatment_effects/code/CommitmentResponse.do @@ -0,0 +1,1404 @@ +// Response to commitment, moderated by demand for flexibility + +*************** +* Environment * +*************** + +clear all +adopath + "input/lib/ado" +adopath + "input/lib/stata/ado" + +********************* +* Utility functions * +********************* + +program define_constants + yaml read YAML using "input/config.yaml" + yaml global STRATA = YAML.metadata.strata +end + +program 
define_plot_settings + global CISPIKE_SETTINGS /// + spikecolor(maroon black gray) /// + cicolor(maroon black gray) + + global CISPIKE_DOUBLE_SETTINGS /// + spike(yaxis(1) || yaxis(2)) /// + ci(yaxis(1) || yaxis(2)) /// + spikecolor(maroon gray) /// + cicolor(maroon gray) + + global CISPIKE_VERTICAL_GRAPHOPTS /// + ylabel(#6) /// + xsize(6.5) ysize(4.5) /// + legend(cols(4)) + + global COLOR_MAROON /// + mcolor(maroon) ciopts(recast(rcap) lcolor(maroon)) + + global COLOR_MAROON_LIGHT /// + mcolor(maroon*0.9) ciopts(recast(rcap) lcolor(maroon*0.9)) + + global COLOR_MAROON_DARK /// + mcolor(maroon*1.1) ciopts(recast(rcap) lcolor(maroon*1.1)) + + + global COLOR_GRAY_LIGHT /// + mcolor(gray*0.9) ciopts(recast(rcap) lcolor(gray*0.9)) + + global COLOR_GRAY_DARK /// + mcolor(gray*1.1) ciopts(recast(rcap) lcolor(gray*1.1)) + + + global COLOR_BLUE /// + mcolor(edkblue) ciopts(recast(rcap) lcolor(edkblue)) + + global COLOR_BLACK /// + mcolor(black) ciopts(recast(rcap) lcolor(black)) + + global COLOR_GRAY /// + mcolor(gray) ciopts(recast(rcap) lcolor(gray)) + + global COLOR_NAVY /// + mcolor(navy) ciopts(recast(rcap) lcolor(navy)) + + global COLOR_NAVY_LIGHT /// + mcolor(navy*0.5) ciopts(recast(rcap) lcolor(navy*0.5)) + + global COEFPLOT_SETTINGS_MINUTES /// + vertical /// + yline(0, lwidth(thin) lcolor(black)) /// + bgcolor(white) graphregion(color(white)) /// + legend(cols(4) region(lcolor(white))) /// + xsize(6.5) ysize(4.5) /// + ytitle("Treatment effect (minutes/day)" " ") + + global COEFPLOT_SETTINGS_THIN /// + vertical /// + yline(0, lwidth(thin) lcolor(black)) /// + bgcolor(white) graphregion(color(white)) /// + legend(cols(4) region(lcolor(white))) /// + xsize(4.5) ysize(4.5) /// + ytitle("Treatment effect (minutes/day)" " ") + + global COEFPLOT_SETTINGS_STD /// + xline(0, lwidth(thin) lcolor(black)) /// + bgcolor(white) graphregion(color(white)) grid(w) /// + legend(rows(1) region(lcolor(white))) /// + xsize(6.5) ysize(4.5) /// + xtitle(" " "Treatment effect (standard deviations per hour/day of use)") + + global COEFPLOT_SETTINGS_ITT /// + xline(0, lwidth(thin) lcolor(black)) /// + bgcolor(white) graphregion(color(white)) grid(w) /// + legend(rows(1) region(lcolor(white))) /// + xsize(6.5) ysize(4.5) /// + xtitle(" " "Treatment effect (standard deviations)") + + global COEFPLOT_LABELS_LIMIT /// + coeflabels(L_1 = `"Snooze 0"' /// + L_2 = `"Snooze 2"' /// + L_3 = `"Snooze 5"' /// + L_4 = `"Snooze 20"' /// + L_5 = `"No snooze"' /// + L = `"Limit"' /// + B = `"Bonus"') + + global COEFPLOT_STACKED_GRAPHOPTS /// + ycommon row(2) /// + graphregion(color(white)) /// + xsize(6.5) ysize(8.5) + + global COEFPLOT_ADDICTION_SETTINGS /// + xline(0, lwidth(thin) lcolor(black)) /// + bgcolor(white) graphregion(color(white)) grid(w) /// + legend(rows(1) region(lcolor(white))) /// + xsize(7) ysize(6.5) + + global ADDICTION_LABELS /// + xlabel(, labsize(small)) /// + xtitle(, size(small)) /// + ylabel(, labsize(vsmall)) /// + ytitle(, size(small)) /// + legend(size(small)) +end + +********************** +* Analysis functions * +********************** + +program main + define_constants + define_plot_settings + import_data + + reg_usage + reg_usage, fitsby + reg_usage_simple + reg_usage_simple, fitsby + reg_usage_simple_balanced + reg_usage_simple_balanced, fitsby + plot_snooze + plot_snooze, fitsby + plot_snooze, minutes + plot_snooze, fitsby minutes + plot_snooze_by_limit + plot_snooze_by_limit, fitsby + plot_snooze_by_limit, minutes + plot_snooze_by_limit, fitsby minutes + plot_snooze_both + 
plot_snooze_both, fitsby + plot_snooze_both_by_limit + plot_snooze_both_by_limit, fitsby + plot_phone_use_change + plot_phone_use_change_simple + reg_usage_interaction + reg_usage_interaction, fitsby + reg_self_control + reg_self_control_null + reg_iv_self_control + reg_usage_simple_weekly + reg_usage_simple_weekly, fitsby + reg_usage_simple_daily_p12 + reg_usage_simple_daily_p12, fitsby + reg_addiction_simple + reg_sms_addiction_simple + reg_swb_simple + reg_swb_icw_simple + reg_sms_addiction_simple_weekly + reg_substitution +end + +program import_data + use "input/final_data_sample.dta", clear +end + +program reg_usage + syntax, [fitsby] + + est clear + + * Determine FITSBY restriction + if ("`fitsby'" == "fitsby") { + local fitsby "FITSBY" + local suffix "_fitsby" + } + else { + local fitsby "" + local suffix "" + } + + * Run regressions + foreach yvar in PD_P2_Usage`fitsby' /// + PD_P3_Usage`fitsby' /// + PD_P4_Usage`fitsby' /// + PD_P5_Usage`fitsby' /// + PD_P432_Usage`fitsby' /// + PD_P5432_Usage`fitsby' { + local baseline PD_P1_Usage`fitsby' + + gen_treatment + reg_treatment, yvar(`yvar') indep($STRATA `baseline') + est store `yvar' + } + + * Plot regressions (by period) + coefplot (PD_P2_Usage`fitsby', label("Period 2") $COLOR_MAROON) /// + (PD_P3_Usage`fitsby', label("Period 3") $COLOR_BLACK) /// + (PD_P4_Usage`fitsby', label("Period 4") $COLOR_NAVY) /// + (PD_P5_Usage`fitsby', label("Period 5") $COLOR_GRAY) , /// + keep(L_*) order(L_1 L_2 L_3 L_4 L_5) /// + $COEFPLOT_SETTINGS_MINUTES /// + $COEFPLOT_LABELS_LIMIT + + graph export "output/coef_usage`suffix'.pdf", replace + + * Plot regressions (all period) + coefplot (PD_P5432_Usage`fitsby', label("Period 2 to 5") $COLOR_MAROON), /// + keep(L_*) order(L_1 L_2 L_3 L_4 L_5) /// + $COEFPLOT_SETTINGS_MINUTES /// + $COEFPLOT_LABELS_LIMIT /// + legend(off) + + graph export "output/coef_usage_combined`suffix'.pdf", replace +end + +program reg_usage_simple + syntax, [fitsby] + + est clear + + * Determine FITSBY restriction + if ("`fitsby'" == "fitsby") { + local fitsby "FITSBY" + local suffix "_fitsby" + } + else { + local fitsby "" + local suffix "" + } + + * Run regressions + foreach yvar in PD_P2_Usage`fitsby' /// + PD_P3_Usage`fitsby' /// + PD_P4_Usage`fitsby' /// + PD_P5_Usage`fitsby' /// + PD_P432_Usage`fitsby' /// + PD_P5432_Usage`fitsby' { + local baseline PD_P1_Usage`fitsby' + + gen_treatment, simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') simple + est store `yvar' + } + + * Plot regressions (by period) + coefplot (PD_P2_Usage`fitsby', label("Period 2") $COLOR_MAROON msymbol(O)) /// + (PD_P3_Usage`fitsby', label("Period 3") $COLOR_BLACK msymbol(S)) /// + (PD_P4_Usage`fitsby', label("Period 4") $COLOR_NAVY msymbol(D)) /// + (PD_P5_Usage`fitsby', label("Period 5") $COLOR_GRAY msymbol(T)), /// + keep(B L) order(B L) /// + $COEFPLOT_SETTINGS_MINUTES /// + $COEFPLOT_LABELS_LIMIT + + graph export "output/coef_usage_simple`suffix'.pdf", replace + + * Plot regressions (by period) + coefplot (PD_P2_Usage`fitsby', label("Period 2") $COLOR_MAROON msymbol(O)) /// + (PD_P3_Usage`fitsby', label("Period 3") $COLOR_MAROON msymbol(S)) /// + (PD_P4_Usage`fitsby', label("Period 4") $COLOR_MAROON msymbol(D)) /// + (PD_P5_Usage`fitsby', label("Period 5") $COLOR_MAROON msymbol(T)), /// + keep(B) order(B) /// + $COEFPLOT_SETTINGS_THIN /// + $COEFPLOT_LABELS_LIMIT + + graph export "output/coef_usage_simple`suffix'_bonus_only.pdf", replace + + * Plot regressions (by period) + coefplot (PD_P2_Usage`fitsby', label("Period 2") $COLOR_GRAY 
msymbol(O)) /// + (PD_P3_Usage`fitsby', label("Period 3") $COLOR_GRAY msymbol(S)) /// + (PD_P4_Usage`fitsby', label("Period 4") $COLOR_GRAY msymbol(D)) /// + (PD_P5_Usage`fitsby', label("Period 5") $COLOR_GRAY msymbol(T)), /// + keep(L) order(L) /// + ysc(r(-60 0)) /// + ylabel(-60(20)0) /// + $COEFPLOT_SETTINGS_THIN /// + $COEFPLOT_LABELS_LIMIT // + + graph export "output/coef_usage_simple`suffix'_limit_only.pdf", replace + + + * Plot regressions (all period) + coefplot (PD_P5432_Usage`fitsby', label("Period 2 to 5") $COLOR_MAROON), /// + keep(B L) order(B L) /// + $COEFPLOT_SETTINGS_MINUTES /// + $COEFPLOT_LABELS_LIMIT /// + legend(off) + + graph export "output/coef_usage_combined_simple`suffix'.pdf", replace +end + +program reg_usage_simple_balanced + syntax, [fitsby] + + est clear + + preserve + + local income 43.01 + local college 0.3009 + local male 0.4867 + local white 0.73581 + local age 47.6 + + ebalance balance_income balance_college balance_male balance_white balance_age, /// + manualtargets(`income' `college' `male' `white' `age') /// + generate(weight) + + * Determine FITSBY restriction + if ("`fitsby'" == "fitsby") { + local fitsby "FITSBY" + local suffix "_fitsby" + } + else { + local fitsby "" + local suffix "" + } + + * Run regressions + foreach yvar in PD_P2_Usage`fitsby' /// + PD_P3_Usage`fitsby' /// + PD_P4_Usage`fitsby' /// + PD_P5_Usage`fitsby' /// + PD_P432_Usage`fitsby' /// + PD_P5432_Usage`fitsby' { + local baseline PD_P1_Usage`fitsby' + + gen_treatment, simple + + reg `yvar' B L $STRATA `baseline' [w=weight], robust + est store `yvar' + } + + * Plot regressions (by period) + coefplot (PD_P2_Usage`fitsby', label("Period 2") $COLOR_MAROON) /// + (PD_P3_Usage`fitsby', label("Period 3") $COLOR_BLACK) /// + (PD_P4_Usage`fitsby', label("Period 4") $COLOR_NAVY) /// + (PD_P5_Usage`fitsby', label("Period 5") $COLOR_GRAY), /// + keep(B L) order(B L) /// + $COEFPLOT_SETTINGS_MINUTES /// + $COEFPLOT_LABELS_LIMIT + + graph export "output/coef_usage_simple_balanced`suffix'.pdf", replace + + restore +end +program reg_substitution + est clear + + gen_treatment, simple + reg_treatment, yvar(S4_Substitution) indep($STRATA) simple + est store S4_Substitution + + * Plot regressions (all period) + coefplot (S4_Substitution, $COLOR_MAROON), /// + keep(B L) order(B L) /// + $COEFPLOT_SETTINGS_MINUTES /// + $COEFPLOT_LABELS_LIMIT /// + legend(off) + + graph export "output/coef_self_reported_substitution.pdf", replace + + gen_treatment, simple + reg_treatment, yvar(S4_Substitution_W) indep($STRATA) simple + est store S4_Substitution_W + + * Plot regressions (all period) + coefplot (S4_Substitution_W, $COLOR_MAROON), /// + keep(B L) order(B L) /// + $COEFPLOT_SETTINGS_MINUTES /// + $COEFPLOT_LABELS_LIMIT /// + legend(off) + + graph export "output/coef_self_reported_substitution_w.pdf", replace +end + + +program reg_usage_simple_weekly + syntax, [fitsby] + + est clear + + * Determine FITSBY restriction + if ("`fitsby'" == "fitsby") { + local fitsby "FITSBY" + local suffix "_fitsby" + } + else { + local fitsby "" + local suffix "" + } + + * Run regressions + foreach yvar in PD_WeeklyUsage`fitsby'_4 /// + PD_WeeklyUsage`fitsby'_5 /// + PD_WeeklyUsage`fitsby'_6 /// + PD_WeeklyUsage`fitsby'_7 /// + PD_WeeklyUsage`fitsby'_8 /// + PD_WeeklyUsage`fitsby'_9 /// + PD_WeeklyUsage`fitsby'_10 /// + PD_WeeklyUsage`fitsby'_11 /// + PD_WeeklyUsage`fitsby'_12 /// + PD_WeeklyUsage`fitsby'_13 /// + PD_WeeklyUsage`fitsby'_14 /// + PD_WeeklyUsage`fitsby'_15 { + local baseline PD_WeeklyUsage`fitsby'_3 + + 
gen_treatment, simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') simple + est store `yvar' + } + + * Plot regressions (by period) + coefplot (PD_WeeklyUsage`fitsby'_4 , label("Week 4") $COLOR_MAROON msymbol(O)) /// + (PD_WeeklyUsage`fitsby'_5 , label("Week 5") $COLOR_BLACK msymbol(S)) /// + (PD_WeeklyUsage`fitsby'_6 , label("Week 6") $COLOR_GRAY msymbol(D)) /// + (PD_WeeklyUsage`fitsby'_7 , label("Week 7") $COLOR_MAROON msymbol(O)) /// + (PD_WeeklyUsage`fitsby'_8 , label("Week 8") $COLOR_BLACK msymbol(S)) /// + (PD_WeeklyUsage`fitsby'_9 , label("Week 9") $COLOR_GRAY msymbol(D)) /// + (PD_WeeklyUsage`fitsby'_10, label("Week 10") $COLOR_MAROON msymbol(O)) /// + (PD_WeeklyUsage`fitsby'_11, label("Week 11") $COLOR_BLACK msymbol(S)) /// + (PD_WeeklyUsage`fitsby'_12, label("Week 12") $COLOR_GRAY msymbol(D)) /// + (PD_WeeklyUsage`fitsby'_13, label("Week 13") $COLOR_MAROON msymbol(O)) /// + (PD_WeeklyUsage`fitsby'_14, label("Week 14") $COLOR_BLACK msymbol(S)) /// + (PD_WeeklyUsage`fitsby'_15, label("Week 15") $COLOR_GRAY msymbol(D)), /// + keep(B L) order(B L) /// + $COEFPLOT_SETTINGS_MINUTES /// + $COEFPLOT_LABELS_LIMIT + + graph export "output/coef_usage_simple_weekly`suffix'.pdf", replace +end + +program reg_usage_simple_daily_p12 + syntax, [fitsby] + + est clear + + * Determine FITSBY restriction + if ("`fitsby'" == "fitsby") { + local fitsby "FITSBY" + local suffix "_fitsby" + } + else { + local fitsby "" + local suffix "" + } + + * Run regressions + foreach day of numlist 1/42 { + local yvar PD_DailyUsage`fitsby'_`day' + + gen_treatment, suffix(`day') simple + reg_treatment, yvar(`yvar') indep($STRATA) suffix(`day') simple + est store `yvar' + } + + * Plot regressions (by period) + coefplot (PD_DailyUsage`fitsby'_1, label("Day 1") $COLOR_NAVY) /// + (PD_DailyUsage`fitsby'_2, label("Day 2") $COLOR_NAVY) /// + (PD_DailyUsage`fitsby'_3, label("Day 3") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_4, label("Day 4") $COLOR_NAVY) /// + (PD_DailyUsage`fitsby'_5, label("Day 5") $COLOR_NAVY) /// + (PD_DailyUsage`fitsby'_6, label("Day 6") $COLOR_NAVY) /// + (PD_DailyUsage`fitsby'_7, label("Day 7") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_8, label("Day 8") $COLOR_NAVY) /// + (PD_DailyUsage`fitsby'_9, label("Day 9") $COLOR_NAVY) /// + (PD_DailyUsage`fitsby'_10, label("Day 10") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_11, label("Day 11") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_12, label("Day 12") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_13, label("Day 13") $COLOR_NAVY) /// + (PD_DailyUsage`fitsby'_14, label("Day 14") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_15, label("Day 15") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_16, label("Day 16") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_17, label("Day 17") $COLOR_NAVY) /// + (PD_DailyUsage`fitsby'_18, label("Day 18") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_19, label("Day 19") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_20, label("Day 20") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_21, label("Day 21") $COLOR_NAVY) /// + (PD_DailyUsage`fitsby'_22, label("Day 22") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_23, label("Day 23") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_24, label("Day 24") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_25, label("Day 25") $COLOR_NAVY) /// + (PD_DailyUsage`fitsby'_26, label("Day 26") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_27, label("Day 27") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_28, label("Day 28") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_29, label("Day 29") $COLOR_NAVY) /// + (PD_DailyUsage`fitsby'_30, 
label("Day 30") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_31, label("Day 31") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_32, label("Day 32") $COLOR_NAVY) /// + (PD_DailyUsage`fitsby'_33, label("Day 33") $COLOR_NAVY) /// + (PD_DailyUsage`fitsby'_34, label("Day 34") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_35, label("Day 35") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_36, label("Day 36") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_37, label("Day 37") $COLOR_NAVY) /// + (PD_DailyUsage`fitsby'_38, label("Day 38") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_39, label("Day 39") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_40, label("Day 40") $COLOR_NAVY ) /// + (PD_DailyUsage`fitsby'_41, label("Day 41") $COLOR_NAVY) /// + (PD_DailyUsage`fitsby'_42, label("Day 42") $COLOR_NAVY), /// + keep(B*) xline(22) /// + $COEFPLOT_SETTINGS_MINUTES /// + $COEFPLOT_LABELS_LIMIT legend(off) /// + xlabel(10 "Period 1" 22 "Survey 2" 34 "Period 2") /// + + + graph export "output/coef_usage_simple_daily_p12`suffix'.pdf", replace +end + +program reg_sms_addiction_simple_weekly + syntax + + est clear + + * Run regressions + foreach week of numlist 4/9 { + local yvar Week`week'_SMSIndex + local comparison_week = `week' - 3 + if (`comparison_week' > 3){ + local comparison_week = `week' - 6 + } + + local baseline Week`comparison_week'_SMSIndex + + gen_treatment, simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') simple + est store `yvar' + } + + * Plot regressions (by period) + coefplot (Week4_SMSIndex , label("Week 4") $COLOR_MAROON) /// + (Week5_SMSIndex , label("Week 5") $COLOR_BLACK ) /// + (Week6_SMSIndex , label("Week 6") $COLOR_GRAY ) /// + (Week7_SMSIndex , label("Week 7") $COLOR_MAROON) /// + (Week8_SMSIndex , label("Week 8") $COLOR_BLACK ) /// + (Week9_SMSIndex , label("Week 9") $COLOR_GRAY ), /// + keep(B L) order(B L) /// + $COEFPLOT_SETTINGS_MINUTES /// + $COEFPLOT_LABELS_LIMIT + + graph export "output/coef_sms_addiction_simple_weekly.pdf", replace +end + +program reg_addiction_simple + syntax + + est clear + + * Run regressions for limit + foreach num of numlist 1/16 { + local baseline S1_Addiction_`num' + + gen S43_Addiction_`num' = (S3_Addiction_`num' + S4_Addiction_`num') / 2 + local yvar S43_Addiction_`num' + + gen_treatment, suffix(_`yvar') simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') suffix(_`yvar') simple + est store `yvar' + } + + * Run regressions for bonus + foreach num of numlist 1/16 { + local baseline S1_Addiction_`num' + + local yvar S4_Addiction_`num' + + gen_treatment, suffix(_`yvar') simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') suffix(_`yvar') simple + est store `yvar' + } + + coefplot (S4_Addiction_*, keep(B_*) label("Bonus") mcolor(maroon) ciopts(recast(rcap) lcolor(maroon)) rename(B_S4_* = *)) /// + (S43_Addiction_*, keep(L_*) label("Limit") mcolor(gray) ciopts(recast(rcap) lcolor(gray)) rename(L_S43_* = *)), /// + $COEFPLOT_ADDICTION_SETTINGS /// + $ADDICTION_LABELS /// + yaxis(1) yscale(axis(1) range(0)) xlabel(-0.06(0.02)0.06, axis(1)) /// + ylabel(1 "Fear missing what happening online" 2 "Check social media/messages immediately after waking up" /// + 3 "Use longer than intended" 4 "Tell yourself just a few more minutes" /// + 5 "Use to distract from personal issues" 6 "Use to distract from anxiety/depression/etc." 
/// + 7 "Use to relax to go to sleep" 8 "Try and fail to reduce use" /// + 9 "Others are concerned about use" 10 "Feel anxious without phone" /// + 11 "Have difficulty putting down phone " 12 "Annoyed at interruption in use" /// + 13 "Use harms school/work performance" 14 "Lose sleep from use" /// + 15 "Prefer phone to human interaction" 16 "Procrastinate by using phone", /// + valuelabel angle(0)) horizontal /// + ytitle("") xtitle("Treatment effect", axis(1)) + + graph export "output/coef_addiction_simple.pdf", replace + +end + + +program reg_sms_addiction_simple + est clear + + preserve + + * Run regressions for limit + foreach num of numlist 1/9 { + local baseline S1_AddictionText_`num' + + gen S23_AddictionText_`num' = (S2_AddictionText_`num' + S3_AddictionText_`num') / 2 + local yvar S23_AddictionText_`num' + + gen_treatment, suffix(_`yvar') simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') suffix(_`yvar') simple + est store `yvar' + } + + * Run regressions for bonus + foreach num of numlist 1/9 { + local baseline S1_AddictionText_`num' + + local yvar S3_AddictionText_`num' + + gen_treatment, suffix(_`yvar') simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') suffix(_`yvar') simple + est store `yvar' + } + + coefplot (S3_AddictionText_*, keep(B_*) label("Bonus") mcolor(maroon) ciopts(recast(rcap) lcolor(maroon)) rename(B_S3_* = *)) /// + (S23_AddictionText_*, keep(L_*) label("Limit") mcolor(gray) ciopts(recast(rcap) lcolor(gray)) rename(L_S23_* = *)), /// + $COEFPLOT_ADDICTION_SETTINGS /// + $ADDICTION_LABELS /// + yaxis(1) yscale(axis(1) range(0)) xlabel(-0.2(0.05)0.2, axis(1)) /// + ylabel(1 "Use longer than intended" 2 "Use harms school/work performance" /// + 3 "Easy to control screen time x (-1)" 4 "Use mindlessly" /// + 5 "Use because felt down" 6 "Use kept from working on something needed" /// + 7 "Ideally used phone less" 8 "Lose sleep from use" /// + 9 "Check social media/messages immediately after waking up", /// + valuelabel angle(0)) horizontal /// + ytitle("") xtitle("Treatment effect", axis(1)) + + graph export "output/coef_sms_addiction_simple.pdf", replace + + restore +end + + +program reg_swb_simple + est clear + + preserve + + gen S1_WellBeing_8 = (S1_WellBeing_1 + S1_WellBeing_2 + S1_WellBeing_3 + S1_WellBeing_4)/4 + gen S1_WellBeing_9 = (S1_WellBeing_5 + S1_WellBeing_6 + S1_WellBeing_7)/3 + gen S3_WellBeing_8 = (S3_WellBeing_1 + S3_WellBeing_2 + S3_WellBeing_3 + S3_WellBeing_4)/4 + gen S3_WellBeing_9 = (S3_WellBeing_5 + S3_WellBeing_6 + S3_WellBeing_7)/3 + gen S4_WellBeing_8 = (S4_WellBeing_1 + S4_WellBeing_2 + S4_WellBeing_3 + S4_WellBeing_4)/4 + gen S4_WellBeing_9 = (S4_WellBeing_5 + S4_WellBeing_6 + S4_WellBeing_7)/3 + + + * Run regressions for limit + foreach num of numlist 1/9 { + local baseline S1_WellBeing_`num' + + gen S43_WellBeing_`num' = (S4_WellBeing_`num' + S3_WellBeing_`num') / 2 + local yvar S43_WellBeing_`num' + + gen_treatment, suffix(_`yvar') simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') suffix(_`yvar') simple + est store `yvar' + } + + * Run regressions for bonus + foreach num of numlist 1/9 { + local baseline S1_WellBeing_`num' + + local yvar S4_WellBeing_`num' + + gen_treatment, suffix(_`yvar') simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') suffix(_`yvar') simple + est store `yvar' + } + + coefplot (S4_WellBeing_*, keep(B_*) label("Bonus") mcolor(maroon) ciopts(recast(rcap) lcolor(maroon)) rename(B_S4_* = *)) /// + (S43_WellBeing_*, keep(L_*) label("Limit") mcolor(gray) 
ciopts(recast(rcap) lcolor(gray)) rename(L_S43_* = *)), /// + $COEFPLOT_ADDICTION_SETTINGS /// + $ADDICTION_LABELS /// + yaxis(1) yscale(axis(1) range(0)) xlabel(-0.09(0.03)0.09, axis(1)) /// + ylabel(1 "Was happy" 2 "Was satisfied with life" /// + 3 "Felt anxious x (-1)" 4 "Felt depressed x (-1)" /// + 5 "Could concentrate" 6 "Was easily distracted x (-1)" /// + 7 "Slept well" 8 "Happy <-> depressed index" /// + 9 "Concentrate <-> sleep index", /// + valuelabel angle(0)) horizontal /// + ytitle("") xtitle("Treatment effect", axis(1)) + + graph export "output/coef_swb_simple.pdf", replace + + restore +end + +program reg_swb_icw_simple + est clear + + preserve + + * Run regressions for limit + foreach num of numlist 1/7 { + local baseline S1_WellBeing_`num' + + gen S43_WellBeing_`num' = (S4_WellBeing_`num' + S3_WellBeing_`num') / 2 + local yvar S43_WellBeing_`num' + + gen_treatment, suffix(_`yvar') simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') suffix(_`yvar') simple + est store `yvar' + } + + * Run regressions for bonus + foreach num of numlist 1/7 { + local baseline S1_WellBeing_`num' + + local yvar S3_WellBeing_`num' + + gen_treatment, suffix(_`yvar') simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') suffix(_`yvar') simple + est store `yvar' + } + + foreach idx in HSAD CDS { + local baseline S1_index_`idx' + gen S43_index_`idx' = (S3_index_`idx' + S4_index_`idx') / 2 + local yvar S43_index_`idx' + gen_treatment, suffix(_`yvar') simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') suffix(_`yvar') simple + est store `yvar' + + local yvar S3_index_`idx' + gen_treatment, suffix(_`yvar') simple + reg_treatment, yvar(`yvar') indep($STRATA `baseline') suffix(_`yvar') simple + est store `yvar' + } + + coefplot (S3_WellBeing_* S3_index_*, keep(B_*) label("Bonus") mcolor(maroon) ciopts(recast(rcap) lcolor(maroon)) rename(B_S3_* = *)) /// + (S43_WellBeing_* S43_index_*, keep(L_*) label("Limit") mcolor(gray) ciopts(recast(rcap) lcolor(gray)) rename(L_S43_* = *)), /// + $COEFPLOT_ADDICTION_SETTINGS /// + $ADDICTION_LABELS /// + yscale(axis(1) range(0)) xlabel(-0.09(0.03)0.09, axis(1)) /// + horizontal /// + xtitle("Treatment effect", axis(1)) /// + group(*index*="", nolabels) /// + ylabel(1 "Was happy" 2 "Was satisfied with life" /// + 3 "Felt anxious x (-1)" 4 "Felt depressed x (-1)" /// + 5 "Could concentrate" 6 "Was easily distracted x (-1)" /// + 7 "Slept well" /// + 9 "Happy, satisfied, anxious, depressed index" /// + 10 "Concentrate, distracted, sleep index", /// + valuelabel angle(0)) + + + graph export "output/coef_swb_icw_simple.pdf", replace + + restore +end + +program plot_snooze + syntax, [fitsby] [minutes] + + * Determine FITSBY restriction + if ("`fitsby'" == "fitsby") { + local fitsby "FITSBY" + local suffix "_fitsby" + } + else { + local fitsby "" + local suffix "" + } + + * Determine snooze measure + if ("`minutes'" == "minutes") { + local measure "Min_W" + local root "min" + local ytitle "(minutes/day)" + } + else { + local measure "Count" + local root "count" + local ytitle "(count/day)" + } + + * Preserve data + preserve + + * Reshape data + keep UserID PD_*Snooze`measure'`fitsby' + rename_but, varlist(UserID) prefix(snooze) + reshape long snooze, i(UserID) j(measure) string + + * Recode data + encode measure, generate(measure_encode) + + recode measure_encode /// + (1 = 1 "Period 2") /// + (2 = 2 "Period 3") /// + (5 = 3 "Period 4") /// + (7 = 4 "Period 5") /// + (4 = 5 "Periods 3 & 4") /// + (3 = 6 "Periods 2 to 4") /// + (6 = 7 "Periods 2 
to 5"), /// + gen(measure_recode) + + * Plot data + gen dummy = 1 + + cispike snooze if measure_recode <= 4, /// + over1(dummy) over2(measure_recode) /// + $CISPIKE_SETTINGS /// + graphopts($CISPIKE_VERTICAL_GRAPHOPTS /// + ytitle("Snooze use `ytitle'" " ") /// + legend(off)) + + graph export "output/cispike_snooze_`root'`suffix'.pdf", replace + + * Restore data + restore +end + +program plot_snooze_both + syntax, [fitsby] + + * Determine FITSBY restriction + if ("`fitsby'" == "fitsby") { + local fitsby "FITSBY" + local suffix "_fitsby" + local ylabel2 0(8)40 + local ylabel1 0(.1).5 + } + else { + local fitsby "" + local suffix "" + local ylabel2 0(8)40 + local ylabel1 0(.1).5 + } + + * Preserve data + preserve + + * Reshape data + keep UserID PD_*SnoozeCount`fitsby' *SnoozeMin_W`fitsby' + rename PD_*Snooze*`fitsby' ** + rename_but, varlist(UserID) prefix(snooze) + reshape long snooze, i(UserID) j(measure) string + + split measure, p("_") + drop measure + rename (measure1 measure2) (time measure) + + * Recode data + encode time, generate(time_encode) + encode measure, generate(measure_encode) + + recode time_encode /// + (1 = 1 "Period 2") /// + (2 = 2 "Period 3") /// + (3 = 3 "Period 4") /// + (6 = 4 "Period 5") /// + (4 = 5 "Periods 3 & 4") /// + (5 = 6 "Periods 2 to 4") /// + (7 = 7 "Periods 2 to 5"), /// + gen(time_recode) + + recode measure_encode /// + (1 = 1 "Snoozes per day") /// + (2 = 2 "Snooze minutes per day"), /// + gen(measure_recode) + + * Plot data + + // Manually set labels and legends for double axis figures + cispike snooze if time_recode <= 3, /// + over1(measure_recode) over2(time_recode) /// + $CISPIKE_DOUBLE_SETTINGS /// + graphopts($CISPIKE_VERTICAL_GRAPHOPTS /// + ylabel(`ylabel1', axis(1)) /// + ylabel(`ylabel2', axis(2)) /// + ytitle("Snoozes per day" " ", axis(1)) /// + ytitle(" " "Snooze minutes per day", axis(2)) /// + legend(order(4 "Snoozes per day" 10 "Snooze minutes per day"))) + + graph export "output/cispike_snooze_both`suffix'.pdf", replace + + * Restore data + restore +end + +program plot_snooze_by_limit + syntax, [fitsby] [minutes] + + * Determine FITSBY restriction + if ("`fitsby'" == "fitsby") { + local fitsby "FITSBY" + local suffix "_fitsby" + } + else { + local fitsby "" + local suffix "" + } + + * Determine snooze measure + if ("`minutes'" == "minutes") { + local measure "Min_W" + local root "min" + local ytitle "(minutes/day)" + } + else { + local measure "Count" + local root "count" + local ytitle "(count/day)" + } + + * Preserve data + preserve + + * Reshape data + keep UserID S2_LimitType PD_*Snooze`measure'`fitsby' + rename_but, varlist(UserID S2_LimitType) prefix(snooze) + reshape long snooze, i(UserID S2_LimitType) j(measure) string + + * Recode data + encode measure, generate(measure_encode) + + recode measure_encode /// + (1 = 1 "Period 2") /// + (2 = 2 "Period 3") /// + (5 = 3 "Period 4") /// + (7 = 4 "Period 5") /// + (4 = 5 "Periods 3 & 4") /// + (3 = 6 "Periods 2 to 4") /// + (6 = 7 "Periods 2 to 5"), /// + gen(measure_recode) + + recode S2_LimitType /// + (0 = .) 
/// + (1 = 1 "Snooze 0") /// + (2 = 2 "Snooze 2") /// + (3 = 3 "Snooze 5") /// + (4 = 4 "Snooze 20") /// + (5 = .), /// + gen(S2_LimitType_recode) + + * Plot data (by period) + cispike snooze if measure_recode <= 3, /// + over1(measure_recode) over2(S2_LimitType_recode) /// + $CISPIKE_SETTINGS /// + graphopts($CISPIKE_VERTICAL_GRAPHOPTS /// + ytitle("Snooze use `ytitle'" " ")) + + graph export "output/cispike_snooze_`root'_by_limit`suffix'.pdf", replace + + * Plot data (all periods) + cispike snooze if measure_recode == 5, /// + over1(measure_recode) over2(S2_LimitType_recode) /// + $CISPIKE_SETTINGS /// + graphopts($CISPIKE_VERTICAL_GRAPHOPTS /// + ytitle("Snooze use `ytitle'" " ") /// + legend(off)) + + graph export "output/cispike_snooze_`root'_combined_by_limit`suffix'.pdf", replace + + + * Restore data + restore +end + +program plot_snooze_both_by_limit + syntax, [fitsby] + + * Determine FITSBY restriction + if ("`fitsby'" == "fitsby") { + local fitsby "FITSBY" + local suffix "_fitsby" + local ylabel2 0(12)60 + local ylabel1 0(.3)1.5 + } + else { + local fitsby "" + local suffix "" + local ylabel2 0(12)60 + local ylabel1 0(.3)1.5 + } + + * Preserve data + preserve + + * Reshape data + keep UserID S2_LimitType PD_*SnoozeCount`fitsby' *SnoozeMin_W`fitsby' + rename PD_*Snooze*`fitsby' ** + rename_but, varlist(UserID S2_LimitType) prefix(snooze) + reshape long snooze, i(UserID S2_LimitType) j(measure) string + + split measure, p("_") + drop measure + rename (measure1 measure2) (time measure) + + * Recode data + encode time, generate(time_encode) + encode measure, generate(measure_encode) + + recode S2_LimitType /// + (0 = .) /// + (1 = 1 "Snooze 0") /// + (2 = 2 "Snooze 2") /// + (3 = 3 "Snooze 5") /// + (4 = 4 "Snooze 20") /// + (5 = .), /// + gen(S2_LimitType_recode) + + recode time_encode /// + (1 = 1 "Period 2") /// + (2 = 2 "Period 3") /// + (3 = 3 "Period 4") /// + (6 = 4 "Period 5") /// + (4 = 5 "Periods 3 & 4") /// + (5 = 6 "Periods 2 to 4") /// + (7 = 7 "Periods 2 to 5"), /// + gen(time_recode) + + recode measure_encode /// + (1 = 1 "Snoozes per day") /// + (2 = 2 "Snooze minutes per day"), /// + gen(measure_recode) + + * Plot data + + // Manually set labels and legends for double axis figures + + * Plot data (all periods) + cispike snooze if time_recode == 6, /// + over1(measure_recode) over2(S2_LimitType_recode) /// + $CISPIKE_DOUBLE_SETTINGS /// + graphopts($CISPIKE_VERTICAL_GRAPHOPTS /// + ylabel(`ylabel1', axis(1)) /// + ylabel(`ylabel2', axis(2)) /// + ytitle("Snoozes per day" " ", axis(1)) /// + ytitle(" " "Snooze minutes per day", axis(2)) /// + legend(order(5 "Snoozes per day" 13 "Snooze minutes per day"))) + + graph export "output/cispike_snooze_both_combined_by_limit`suffix'.pdf", replace + + * Restore data + restore +end + +program plot_phone_use_change + * Preserve data + preserve + + * Reshape data + keep UserID S2_LimitType *PhoneUseChange + rename_but, varlist(UserID S2_LimitType) prefix(phone_use) + reshape long phone_use, i(UserID S2_LimitType) j(measure) string + + * Recode data + encode measure, generate(measure_encode) + + recode measure_encode /// + (1 = 1 "Survey 1") /// + (2 = 2 "Survey 3") /// + (3 = 3 "Survey 4"), /// + gen(measure_recode) + + recode S2_LimitType /// + (0 = 0 "Control") /// + (1 = 1 "Snooze 0") /// + (2 = 2 "Snooze 2") /// + (3 = 3 "Snooze 5") /// + (4 = 4 "Snooze 20") /// + (5 = 5 "No snooze"), /// + gen(S2_LimitType_recode) + + * Plot data + cispike phone_use, /// + over1(measure_recode) over2(S2_LimitType_recode) /// + 
$CISPIKE_SETTINGS /// + graphopts($CISPIKE_VERTICAL_GRAPHOPTS /// + ytitle("Phone use change (percent)" " ") /// + yline(0, lwidth(thin) lcolor(black))) + + graph export "output/cispike_phone_use.pdf", replace + + * Restore data + restore +end + +program plot_phone_use_change_simple + * Preserve data + preserve + + * Reshape data + keep UserID S2_LimitType S3_Bonus *PhoneUseChange + rename_but, varlist(UserID S2_LimitType S3_Bonus) prefix(phone_use) + reshape long phone_use, i(UserID S2_LimitType S3_Bonus) j(measure) string + + * Recode data + encode measure, generate(measure_encode) + + recode measure_encode /// + (1 = 1 "Survey 1") /// + (2 = 2 "Survey 3") /// + (3 = 3 "Survey 4"), /// + gen(measure_recode) + + gen treatment = . + replace treatment = 0 if S2_LimitType == 0 & S3_Bonus == 0 + replace treatment = 1 if S2_LimitType == 0 & S3_Bonus == 1 + replace treatment = 2 if S2_LimitType != 0 & S3_Bonus == 0 + replace treatment = 3 if S2_LimitType != 0 & S3_Bonus == 1 + + recode treatment /// + (0 = 0 "Control") /// + (1 = 1 "Bonus only") /// + (2 = 2 "Limit only") /// + (3 = 3 "Bonus and limit"), /// + gen(treatment_recode) + + * Plot data + cispike phone_use, /// + over1(measure_recode) over2(treatment_recode) /// + $CISPIKE_SETTINGS /// + graphopts($CISPIKE_VERTICAL_GRAPHOPTS /// + ytitle("Phone use change (percent)" " ") /// + yline(0, lwidth(thin) lcolor(black))) + + graph export "output/cispike_phone_use_simple.pdf", replace + + * Restore data + restore +end + +program reg_usage_interaction + syntax, [fitsby] + + est clear + + * Determine FITSBY restriction + if ("`fitsby'" == "fitsby") { + local fitsby "FITSBY" + local suffix "_fitsby" + } + else { + local fitsby "" + local suffix "" + } + + * Run regressions + foreach yvar in PD_P2_Usage`fitsby' /// + PD_P3_Usage`fitsby' /// + PD_P4_Usage`fitsby' /// + PD_P5_Usage`fitsby' { + local baseline PD_P1_Usage`fitsby' + + gen_interaction + reg_interaction, yvar(`yvar') indep($STRATA `baseline') + est store `yvar' + } + + * Plot regressions + coefplot (PD_P2_Usage`fitsby', label("Period 2") $COLOR_MAROON msymbol(O)) /// + (PD_P3_Usage`fitsby', label("Period 3") $COLOR_BLACK msymbol(S)) /// + (PD_P4_Usage`fitsby', label("Period 4") $COLOR_NAVY msymbol(D)) /// + (PD_P5_Usage`fitsby', label("Period 5") $COLOR_GRAY msymbol(T)), /// + keep(B_* L_*) order(B_1 L_1 B_L_1_1) /// + $COEFPLOT_SETTINGS_MINUTES + + graph export "output/coef_usage_interaction`suffix'.pdf", replace +end + +program reshape_self_control_outcomes + * Reshape wide to long + gen S4_Usage_FITSBY = PD_P3_UsageFITSBY + gen S3_Usage_FITSBY = PD_P2_UsageFITSBY + + keep UserID S3_Bonus S2_LimitType Stratifier /// + S*_Usage_FITSBY /// + S*_PhoneUseChange_N /// + S*_AddictionIndex_N /// + S*_SMSIndex_N /// + S*_SWBIndex_N /// + S*_LifeBetter_N /// + S*_index_well_N + + local indep UserID S3_Bonus S2_LimitType Stratifier S1_* + rename_but, varlist(`indep') prefix(outcome) + reshape long outcome, i(`indep') j(measure) string + + split measure, p(_) + replace measure = measure2 + "_" + measure3 + "_" + measure4 if measure4 != "" + replace measure = measure2 + "_" + measure3 if measure4 == "" + rename measure1 survey + drop measure2 measure3 measure4 + + * Reshape long to wide + reshape wide outcome, i(UserID survey) j(measure) string + rename outcome* * + + * Recode data + encode survey, gen(S) + + * Label data + label var PhoneUseChange "Ideal use change" + label var AddictionIndex "Addiction scale x (-1)" + label var SMSIndex "SMS addiction scale x (-1)" + label var LifeBetter 
"Phone makes life better" + label var SWBIndex "Subjective well-being" + label var index_well "Survey index" +end + +program gen_coefficient + syntax, var(str) suffix(str) label_var(str) + + cap drop C`suffix' + gen C`suffix' = `var' + + local vlabel: variable label `label_var' + label var C`suffix' "`vlabel'" +end + +program reg_self_control + est clear + + * Preserve data + preserve + + * Reshape data + reshape_self_control_outcomes + + * Specify regression + local yvarset /// + PhoneUseChange_N /// + AddictionIndex_N /// + SMSIndex_N /// + LifeBetter_N /// + SWBIndex_N /// + index_well_N + + * Run regressions + foreach yvar in `yvarset' { + local baseline = "S1_`yvar'" + + * Treatment indicators + gen_treatment, suffix(_`yvar') simple + cap drop B3_`yvar' + cap drop B4_`yvar' + gen B3_`yvar' = B_`yvar' * (S == 1) + gen B4_`yvar' = B_`yvar' * (S == 2) + + * Specify regression + local indep i.S i.S#$STRATA i.S#c.`baseline' + + * Limit + gen_coefficient, var(L_`yvar') suffix(_`yvar') label_var(`yvar') + reg `yvar' C_`yvar' B3_`yvar' B4_`yvar' `indep', robust cluster(UserID) + est store L_`yvar' + + * Bonus + gen_coefficient, var(B4_`yvar') suffix(_`yvar') label_var(`yvar') + reg `yvar' L_`yvar' B3_`yvar' C_`yvar' `indep', robust cluster(UserID) + est store B_`yvar' + } + + * Plot regressions + coefplot (B_*, label("Bonus") $COLOR_MAROON) /// + (L_*, label("Limit") $COLOR_GRAY), /// + keep(C_*) /// + $COEFPLOT_SETTINGS_ITT + + graph export "output/coef_self_control.pdf", replace + + * Restore data + restore +end + +program reg_self_control_null + est clear + + * Preserve data + preserve + + * Reshape data + reshape_self_control_outcomes + + * Specify regression + local yvarset /// + PhoneUseChange_N /// + AddictionIndex_N /// + SMSIndex_N /// + LifeBetter_N /// + SWBIndex_N /// + index_well_N + + * Run regressions + foreach yvar in `yvarset' { + local baseline = "S1_`yvar'" + + * Treatment indicators + gen_treatment, suffix(_`yvar') simple + cap drop B3_`yvar' + cap drop B4_`yvar' + cap drop L3_`yvar' + cap drop L4_`yvar' + gen B3_`yvar' = B_`yvar' * (S == 1) + gen B4_`yvar' = B_`yvar' * (S == 2) + gen L3_`yvar' = L_`yvar' * (S == 1) + gen L4_`yvar' = L_`yvar' * (S == 2) + + * Specify regression + local indep i.S i.S#$STRATA i.S#c.`baseline' + + * Limit + gen_coefficient, var(L3_`yvar') suffix(_`yvar') label_var(`yvar') + reg `yvar' C_`yvar' B3_`yvar' B4_`yvar' L4_`yvar' `indep', robust cluster(UserID) + est store L3_`yvar' + + gen_coefficient, var(L4_`yvar') suffix(_`yvar') label_var(`yvar') + reg `yvar' C_`yvar' B3_`yvar' B4_`yvar' L3_`yvar' `indep', robust cluster(UserID) + est store L4_`yvar' + + * Bonus + gen_coefficient, var(B3_`yvar') suffix(_`yvar') label_var(`yvar') + reg `yvar' C_`yvar' L_`yvar' B4_`yvar' `indep', robust cluster(UserID) + est store B3_`yvar' + + + gen_coefficient, var(B4_`yvar') suffix(_`yvar') label_var(`yvar') + reg `yvar' C_`yvar' L_`yvar' B3_`yvar' `indep', robust cluster(UserID) + est store B4_`yvar' + } + + * Plot regressions + coefplot (B3_*, label("Bonus: Survey 3") $COLOR_MAROON_LIGHT msymbol(o)) /// + (B4_*, label("Bonus: Survey 4") $COLOR_MAROON_DARK msymbol(s)) /// + (L3_*, label("Limit: Survey 3") $COLOR_GRAY_LIGHT msymbol(o)) /// + (L4_*, label("Limit: Survey 4") $COLOR_GRAY_DARK msymbol(s)), /// + keep(C_*) /// + $COEFPLOT_SETTINGS_ITT /// + $ADDICTION_LABELS + + graph export "output/coef_self_control_null.pdf", replace + + * Restore data + restore +end + +program reg_iv_self_control + est clear + + * Preserve data + preserve + + * Reshape data + 
reshape_self_control_outcomes + + * Specify regression + local yvarset /// + PhoneUseChange_N /// + AddictionIndex_N /// + SMSIndex_N /// + LifeBetter_N /// + SWBIndex_N /// + index_well_N + + * Run regressions + foreach yvar in `yvarset' { + local baseline = "S1_`yvar'" + + * Treatment indicators + gen_treatment, suffix(_`yvar') simple + + * Specify regression + local indep i.S i.S#$STRATA i.S#c.`baseline' + + * Run regression + gen_usage_stacked, yvar(`yvar') suffix(_`yvar') var(`yvar') + reg_usage_stacked, yvar(`yvar') suffix(_`yvar') indep(`indep') + est store U_`yvar' + } + + * Plot regressions + coefplot (U_*, $COLOR_NAVY), /// + keep(U_*) /// + $COEFPLOT_SETTINGS_STD /// + legend(off) + + graph export "output/coef_iv_self_control.pdf", replace + + * Restore data + restore +end + +*********** +* Execute * +*********** + +main diff --git a/17/replication_package/code/analysis/treatment_effects/code/FDRTable.do b/17/replication_package/code/analysis/treatment_effects/code/FDRTable.do new file mode 100644 index 0000000000000000000000000000000000000000..de30379bc3d5ca76bab83d3e7eb17cae15fe4400 --- /dev/null +++ b/17/replication_package/code/analysis/treatment_effects/code/FDRTable.do @@ -0,0 +1,252 @@ +*************** +* Environment * +*************** + +clear all +adopath + "input/lib/ado" +adopath + "input/lib/stata/ado" + +program main + define_constants + import_data + run_regs + + create_pval_tables + create_pval_tables, limit +end + +program define_constants + yaml read YAML using "input/config.yaml" + yaml global STRATA = YAML.metadata.strata +end + +program import_data + use "input/final_data_sample.dta", clear + gen_treatment, simple +end + +program latex + syntax, name(str) value(str) + + local command = "\newcommand{\\`name'}{`value'}" + + file open scalars using "output/scalars.tex", write append + file write scalars `"`command'"' _n + file close scalars +end + +program latex_precision + syntax, name(str) value(str) digits(str) + + autofmt, input(`value') dec(`digits') strict + local value = r(output1) + + latex, name(`name') value(`value') +end + +program reshape_self_control_outcomes + * Reshape wide to long + gen S4_Usage_FITSBY = PD_P3_UsageFITSBY + gen S3_Usage_FITSBY = PD_P2_UsageFITSBY + + keep UserID S3_Bonus S2_LimitType Stratifier /// + S*_Usage_FITSBY /// + S*_PhoneUseChange_N /// + S*_AddictionIndex_N /// + S*_SMSIndex_N /// + S*_SWBIndex_N /// + S*_LifeBetter_N /// + S*_index_well_N + + local indep UserID S3_Bonus S2_LimitType Stratifier S1_* + rename_but, varlist(`indep') prefix(outcome) + reshape long outcome, i(`indep') j(measure) string + + split measure, p(_) + replace measure = measure2 + "_" + measure3 + "_" + measure4 if measure4 != "" + replace measure = measure2 + "_" + measure3 if measure4 == "" + rename measure1 survey + drop measure2 measure3 measure4 + + * Reshape long to wide + reshape wide outcome, i(UserID survey) j(measure) string + rename outcome* * + + * Recode data + encode survey, gen(S) + + * Label data + label var PhoneUseChange "Ideal use change" + label var AddictionIndex "Addiction scale x (-1)" + label var SMSIndex "SMS addiction scale x (-1)" + label var LifeBetter "Phone makes life better" + label var SWBIndex "Subjective well-being" + label var index_well "Survey index" + +end + +program make_treatment_indicators + * Hacky way to not have LifeBetter be dropped + gen alt_LifeBetter_N = LifeBetter_N + * Treatment indicators + gen_treatment, simple + cap drop LifeBetter_N + gen LifeBetter_N = alt_LifeBetter_N + label var LifeBetter_N 
"Phone makes life better" +end + +program run_regs + * Reshape data + reshape_self_control_outcomes + + local swb_vars /// + PhoneUseChange_N /// + AddictionIndex_N /// + SMSIndex_N /// + LifeBetter_N /// + SWBIndex_N /// + index_well_N + + make_treatment_indicators + + cap drop B3 + cap drop B4 + gen B3 = B * (S == 1) + gen B4 = B * (S == 2) + replace B = B4 + + * Run regressions + foreach yvar in `swb_vars' { + local baseline = "S1_`yvar'" + + * Specify regression + local indep i.S i.S#$STRATA i.S#c.`baseline' + + * Limit + reg `yvar' B B4 L `indep', robust cluster(UserID) + est store `yvar' + } +end + +program create_pval_tables + syntax, [limit] + + if ("`limit'" == "limit") { + local T B + local Survey "S34" + local file_suffix "limit" + } + else { + local T L + local Survey "S3" + local file_suffix "bonus" + } + + local swb_vars /// + PhoneUseChange_N /// + AddictionIndex_N /// + SMSIndex_N /// + LifeBetter_N /// + SWBIndex_N /// + index_well_N + + local mat_length = 0 + foreach var in `swb_vars' { + local mat_length = `mat_length' + 1 + } + + foreach matname in sd count mean Var min max sum range { + mat `matname'_swb = J(1,`mat_length',.) + mat rownames `matname'_swb = `matname' + mat colnames `matname'_swb = `swb_vars' + } + + local mat_length = 0 + foreach var in `swb_vars' { + local mat_length = `mat_length' + 1 + } + mat pvalues = J(1,`mat_length',.) + + ** Make descriptive stats and estimate tables + local pvalue_counter = 1 + foreach varset in swb_vars { + local suffix swb + local mat_counter = 1 + + foreach yvar in ``varset'' { + est restore `yvar' + mat count_`suffix'[1, `mat_counter'] = e(N) + mat mean_`suffix'[1, `mat_counter'] = _b[`T'] + mat Var_`suffix'[1, `mat_counter'] = _se[`T'] + local pvalue = 2 * ttail(e(N) - e(df_m), abs(_b[`T']/_se[`T'])) + + est restore `yvar' + mat min_`suffix'[1, `mat_counter'] = _b[`T'] + mat max_`suffix'[1, `mat_counter'] = _se[`T'] + mat sum_`suffix'[1, `mat_counter'] = `pvalue' + mat pvalues[1, `pvalue_counter'] = `pvalue' + local mat_counter = `mat_counter' + 1 + local pvalue_counter = `pvalue_counter' + 1 + } + } + + clear + + mat pvalues = pvalues' + svmat float pvalues, name(pval) + + do "../../lib/stata/SharpenPValues.do" + + * Note that SWB index is the fifth variable + * Save SWB index FDR sharpened q value as a scalar + local fdr_val = bky06_qval[5] + *latex_precision, name(`file_suffix'SWBfdr) value(`fdr_val') digits(2) + + mkmat bky06_qval, matrix(sharpened_vals) + mat sharpened_vals = sharpened_vals' + + import_data + reshape_self_control_outcomes + make_treatment_indicators + + local pvalue_counter = 1 + foreach varset in swb_vars { + local suffix swb + + local mat_counter = 1 + foreach yvar in ``varset'' { + mat range_`suffix'[1, `mat_counter'] = sharpened_vals[1, `pvalue_counter'] + local mat_counter = `mat_counter' + 1 + local pvalue_counter = `pvalue_counter' + 1 + } + + estpost tabstat ``varset'' if `T'==0, statistics(mean, sd, max, min, count) columns(statistics) + foreach value in count { + estadd mat `value' = `value'_`suffix', replace + } + est store `varset' + + estpost tabstat ``varset'', statistics(mean, Var, max, min, sum, range) columns(statistics) + foreach value in mean Var max min sum range { + estadd mat `value' = `value'_`suffix', replace + } + est store `varset'_reg + + esttab `varset' using "output/`varset'_descriptive_stats_`file_suffix'.tex", /// + label cells((mean(fmt(%8.2fc)) sd(fmt(%8.2fc)) min(fmt(%8.0fc)) max(fmt(%8.0fc)) count(fmt(%8.0fc)))) /// + collabels("\shortstack{Mean}" 
"\shortstack{Standard\\deviation}" "\shortstack{Minimum\\value}" "\shortstack{Maximum\\value}" "\shortstack{N in\\regression}") /// + noobs replace nomtitle nonumbers compress + + esttab `varset'_reg using "output/`varset'_estimates_`file_suffix'.tex", /// + label cells((mean(fmt(%8.2fc)) Var(fmt(%8.2fc)) min(fmt(%8.2fc)) max(fmt(%8.2fc)) sum(fmt(%8.2fc)) range(fmt(%8.2fc)))) /// + collabels("\shortstack{(1)\\Treatment\\effect\\(original\\units)}" "\shortstack{(2)\\Standard\\error\\(original\\units)}" "\shortstack{(3)\\Treatment\\effect\\(SD units)}" /// + "\shortstack{(4)\\Standard\\error\\(SD units)}" "\shortstack{(5)\\P-value}" "\shortstack{(6)\\Sharpened\\FDR-\\adjusted\\q-value}") /// + noobs replace nomtitle nonumbers compress + } +end + +*********** +* Execute * +*********** + +main diff --git a/17/replication_package/code/analysis/treatment_effects/code/HabitFormation.do b/17/replication_package/code/analysis/treatment_effects/code/HabitFormation.do new file mode 100644 index 0000000000000000000000000000000000000000..57d37e04807176b576895e1936f6f98420e8b30c --- /dev/null +++ b/17/replication_package/code/analysis/treatment_effects/code/HabitFormation.do @@ -0,0 +1,121 @@ +// Habit formation and naivete + +*************** +* Environment * +*************** + +clear all +adopath + "input/lib/ado" +adopath + "input/lib/stata/ado" + +********************* +* Utility functions * +********************* + +program define_constants + yaml read YAML using "input/config.yaml" + yaml global STRATA = YAML.metadata.strata +end + +program define_plot_settings + global COLOR_MAROON /// + mcolor(maroon) ciopts(recast(rcap) lcolor(maroon)) + + global COLOR_BLACK /// + mcolor(black) ciopts(recast(rcap) lcolor(black)) + + global COLOR_GRAY /// + mcolor(gray) ciopts(recast(rcap) lcolor(gray)) + + global COLOR_NAVY /// + mcolor(navy) ciopts(recast(rcap) lcolor(navy)) + + global COEFPLOT_SETTINGS_MINUTES /// + vertical /// + yline(0, lwidth(thin) lcolor(black)) /// + bgcolor(white) graphregion(color(white)) /// + legend(cols(3) region(lcolor(white))) /// + xsize(6.5) ysize(4.5) /// + ytitle("Treatment effect (minutes/day)" " ") /// + coeflabels(B_P3 = `"Period 3"' /// + B_P4 = `"Period 4"' /// + B_P5 = `"Period 5"') + + global COEFPLOT_SETTINGS_MINUTES_DOUBLE /// + vertical /// + yline(0, lwidth(thin) lcolor(black)) /// + bgcolor(white) graphregion(color(white)) /// + xsize(6.5) ysize(4.5) /// + ytitle("Treatment effect (minutes/day)" " ") /// + ytitle("Treatment effect (ICW index)" " ", axis(2)) /// + coeflabels(B_P3 = `"Period 3"' /// + B_P4 = `"Period 4"' /// + B_P5 = `"Period 5"') /// + legend(cols(1) region(lcolor(white))) /// + ylabel(-60(30)60) /// + ylabel(-0.1(0.05)0.1, axis(2)) +end + +********************** +* Analysis functions * +********************** + +program main + define_constants + define_plot_settings + import_data + + survey_effects_rsi +end + +program import_data + use "input/final_data_sample.dta", clear +end + +program survey_effects_rsi + preserve + + * Clean data + rename PD_*_UsageFITSBY UsageActual_* + rename S3_PredictUseNext_1_W UsagePredicted_P3 + rename S3_PredictUseNext_2_W UsagePredicted_P4 + rename S3_PredictUseNext_3_W UsagePredicted_P5 + + * Run regressions + foreach yvar in UsageActual { + foreach survey in P3 P4 P5 { + local baseline `yvar'_P1 + + gen_treatment, suffix(_`survey') + reg_treatment, yvar(`yvar'_`survey') suffix(_`survey') indep($STRATA `baseline') + est store `yvar'_`survey' + } + } + + foreach yvar in UsagePredicted { + foreach survey in P3 P4 P5 { + 
local baseline UsageActual_P1 + + gen_treatment, suffix(_`survey') + reg_treatment, yvar(`yvar'_`survey') suffix(_`survey') indep($STRATA `baseline') + est store `yvar'_`survey' + } + } + + * Plot regressions (by period) + coefplot (UsageActual*, label("Actual use") $COLOR_MAROON) /// + (UsagePredicted*, label("Predicted use") $COLOR_GRAY), /// + keep(B_*) /// + $COEFPLOT_SETTINGS_MINUTES + + restore + + graph export "output/habit_formation_fitsby.pdf", replace + +end + +*********** +* Execute * +*********** + +main diff --git a/17/replication_package/code/analysis/treatment_effects/code/Heterogeneity.do b/17/replication_package/code/analysis/treatment_effects/code/Heterogeneity.do new file mode 100644 index 0000000000000000000000000000000000000000..cb4d230743eecbc9afc306cebc4cce5c1e129964 --- /dev/null +++ b/17/replication_package/code/analysis/treatment_effects/code/Heterogeneity.do @@ -0,0 +1,963 @@ +// Heterogeneity + +*************** +* Environment * +*************** + +clear all +adopath + "input/lib/ado" +adopath + "input/lib/stata/ado" + +********************* +* Utility functions * +********************* + +program define_constants + yaml read YAML using "input/config.yaml" + yaml global STRATA = YAML.metadata.strata + + global app_list Facebook Instagram Twitter Snapchat Browser YouTube Other +end + +program define_plot_settings + global CISPIKE_VERTICAL_GRAPHOPTS /// + ylabel(#6) /// + xsize(6.5) ysize(4.5) /// + legend(cols(3)) + + global CISPIKE_HORIZONTAL_GRAPHOPTS /// + xlabel(#6) /// + xsize(6.5) ysize(6.5) + + global CISPIKE_STACKED_GRAPHOPTS /// + xcommon row(2) /// + graphregion(color(white)) /// + xsize(6.5) ysize(8) + + global CISPIKE_SETTINGS /// + spikecolor(maroon black gray) /// + cicolor(maroon black gray) + + global COLOR_MAROON /// + mcolor(maroon) ciopts(recast(rcap) lcolor(maroon)) + + global COLOR_LIGHT_RED /// + mcolor(maroon*0.7) ciopts(recast(rcap) lcolor(maroon*0.7)) + + global COLOR_DARK_RED /// + mcolor(maroon*1.3) ciopts(recast(rcap) lcolor(maroon*1.3)) + + global COLOR_LIGHT_GREY /// + mcolor(gray*0.8) ciopts(recast(rcap) lcolor(gray*0.8)) + + global COLOR_DARK_GREY /// + mcolor(gray*1.3) ciopts(recast(rcap) lcolor(gray*1.3)) + + global COLOR_DARK_GREEN /// + mcolor(teal) ciopts(recast(rcap) lcolor(teal)) + + global COLOR_LIGHT_GREEN /// + mcolor(eltgreen) ciopts(recast(rcap) lcolor(eltgreen)) + + global COLOR_BLACK /// + mcolor(black) ciopts(recast(rcap) lcolor(black)) + + global COLOR_GRAY /// + mcolor(gray) ciopts(recast(rcap) lcolor(gray)) + + global COEFPLOT_VERTICAL_SETTINGS /// + mcolor(maroon) ciopts(recast(rcap) lcolor(maroon)) /// + yline(0, lwidth(thin) lcolor(black)) /// + bgcolor(white) graphregion(color(white)) /// + legend(rows(1) region(lcolor(white))) /// + xsize(8) ysize(4) /// + ytitle("Treatment effect (minutes/day)" " ") + + global COEFPLOT_HORIZONTAL_HTE_SETTINGS /// + xline(0, lwidth(thin) lcolor(black)) /// + bgcolor(white) graphregion(color(white)) grid(w) /// + legend(cols(1) region(lcolor(white))) /// + xsize(6.5) ysize(6.5) + + global COEFPLOT_HORIZONTAL_MED_SETTINGS /// + xline(0, lwidth(thin) lcolor(black)) /// + bgcolor(white) graphregion(color(white)) grid(w) /// + legend(rows(1) region(lcolor(white))) /// + xsize(6.5) ysize(6.5) + + global SMALL_LABELS /// + xlabel(, labsize(small)) /// + xtitle(, size(small)) /// + ylabel(, labsize(small)) /// + ytitle(, size(small)) /// + legend(size(small)) + + global COEF_SMALL_LABELS /// + coeflabels(, labsize(small)) /// + $SMALL_LABELS +end + +********************** +* Analysis 
functions * +********************** + +program main + define_constants + define_plot_settings + import_data + + get_temptation_ranks + get_usage_ranks + plot_temptation + reg_usage_by_app + reg_usage_by_app_combined + plot_limit_tight_by_app + reg_usage_by_time + reg_usage_by_time, fitsby + reg_usage_by_time_scaled + reg_usage_by_time_scaled, fitsby + reg_usage_by_person + reg_usage_by_person_p3 + reg_usage_by_person, fitsby + reg_usage_by_person_p3, fitsby + reg_iv_stacked_by_person + plot_wtp_motivation + plot_limit_wtp +end + +program import_data + use "input/final_data_sample.dta", clear + rename S1_IdealApp_Messenger S1_IdealApp_Messaging +end + +program reshape_ideal_use + * Reshape data + keep UserID S1_IdealApp_* + reshape long S1_IdealApp_, i(UserID) j(app) string + + * Recode data + encode app, generate(app_encode) + + recode S1_IdealApp_ /// + (1 = -75 ) /// + (2 = -37.5) /// + (3 = -12.5) /// + (4 = 0 ) /// + (5 = 12.5) /// + (6 = 37.5) /// + (7 = 75 ) /// + (8 = 0 ), /// + gen(S1_IdealApp_recode) +end + +program get_usage_ranks + * Preserve data + preserve + + * Reshape data + keep UserID PD_P1_Usage_* + + drop PD_P1_Usage_H* + + reshape long PD_P1_Usage_, i(UserID) j(app) s + replace PD_P1_Usage_ = 0 if PD_P1_Usage_ == . + + * Get temptation rankings + collapse (mean) PD_P1_Usage_, by(app) + drop if app == "Other" + gsort -PD_P1_Usage_ + gen app_rank = _n + + * Append other last + set obs `=_N+1' + replace app = "Other" if app == "" + replace app_rank = `=_N' if app_rank == . + labmask app_rank, values(app) + + * Categorize apps + gen category = 2 + replace category = 1 if /// + inlist(app, "Facebook", "Instagram", "Twitter", "Snapchat", "Browser", "YouTube") + + * Save data + keep app app_rank category + save "temp/app_rank_usage.dta", replace + + * Restore data + restore +end + +program get_temptation_ranks + * Preserve data + preserve + + * Reshape data + reshape_ideal_use + + * Get temptation rankings + collapse (mean) S1_IdealApp_recode, by(app) + gsort +S1_IdealApp_recode + gen app_rank = _n + + * Append other last + set obs `=_N+1' + replace app = "Other" if app == "" + replace app_rank = `=_N' if app_rank == . 
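+ * labmask (user-written, from the labutil package) attaches each app name as the value label of the corresponding app_rank value.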
+ labmask app_rank, values(app) + + * Categorize apps + gen category = 2 + replace category = 1 if /// + inlist(app, "Facebook", "Instagram", "Twitter", "Snapchat", "Browser", "YouTube") + + * Save data + keep app app_rank category + save "temp/app_rank.dta", replace + + * Restore data + restore +end + +program gen_rank_labels + syntax, [prefix(str) suffix(str)] + + * Preserve data + preserve + + * Import ranks + use "temp/app_rank.dta", clear + + global rank_labels "" + local N = _N + + forvalues i = 1/`N' { + local app = app[`i'] + global rank_labels "$rank_labels `prefix'`app'`suffix'" + } + + * Restore data + restore +end + +program gen_rank_labels_usage + syntax, [prefix(str) suffix(str)] + + * Preserve data + preserve + + * Import ranks + use "temp/app_rank_usage.dta", clear + + global rank_labels_usage "" + local N = _N + + forvalues i = 1/`N' { + local app = app[`i'] + global rank_labels_usage "$rank_labels_usage `prefix'`app'`suffix'" + } + + * Restore data + restore +end + +program plot_temptation + * Preserve data + preserve + + * Reshape data + reshape_ideal_use + + * Merging in rankings + merge m:1 app using "temp/app_rank.dta", nogen assert(2 3) keep(3) + + * Plot data (app categories together) + gen dummy = 1 + + //TODO: fix this bug + cispike S1_IdealApp_recode, /// + over1(dummy) over2(app_rank) /// + horizontal missing reverse $CISPIKE_SETTINGS /// + graphopts($CISPIKE_HORIZONTAL_GRAPHOPTS /// + ylabel(none, axis(2)) /// + xtitle(" " "Ideal use change (percent)") /// + legend(off) /// + $SMALL_LABELS) + + graph export "output/overuse_by_app.pdf", replace + + * Plot data (app categories separately) + cispike S1_IdealApp_recode if category == 1, /// + over1(dummy) over2(app_rank) over3(category) /// + horizontal missing reverse $CISPIKE_SETTINGS /// + graphopts($CISPIKE_HORIZONTAL_GRAPHOPTS /// + ylabel(none, axis(2)) /// + xtitle("") /// + legend(off) fysize(45)) + + graph save "output/overuse_by_app_fitsby.gph", replace + + cispike S1_IdealApp_recode if category == 2, /// + over1(dummy) over2(app_rank) over3(category) /// + horizontal missing reverse $CISPIKE_SETTINGS /// + graphopts($CISPIKE_HORIZONTAL_GRAPHOPTS /// + ylabel(none, axis(2)) /// + xtitle(" " "Ideal use change (percent)") /// + legend(off)) + + graph save "output/overuse_by_app_non_fitsby.gph", replace + + graph combine /// + "output/overuse_by_app_fitsby.gph" /// + "output/overuse_by_app_non_fitsby.gph", /// + $CISPIKE_STACKED_GRAPHOPTS + + graph export "output/overuse_by_app_stacked.pdf", replace + + * Restore data + restore +end + +program reg_usage_by_app + est clear + + foreach app in $app_list { + * Specify regression + cap drop `app' + cap gen `app' = PD_P5432_Usage_`app' + label var `app' "`app'" + local yvar `app' + local baseline PD_P1_Usage_`app' + + * Run regression + gen_treatment, suffix(_`yvar') var(`yvar') simple + reg_treatment, yvar(`yvar') suffix(_`yvar') indep($STRATA `baseline') simple + est store Limit_Est_`yvar' + } + + foreach app in $app_list { + * Specify regression + cap drop `app' + cap gen `app' = PD_P3_Usage_`app' + label var `app' "`app'" + local yvar `app' + local baseline PD_P1_Usage_`app' + + * Run regression + gen_treatment, suffix(_`yvar') var(`yvar') simple + reg_treatment, yvar(`yvar') suffix(_`yvar') indep($STRATA `baseline') simple + est store Bonus_Est_`yvar' + } + + local app_list_bonus Facebook_B Instagram_B Twitter_B Snapchat_B Browser_B YouTube_B Other_B + + * Plot regressions + gen_rank_labels_usage, prefix("") + + coefplot (Bonus_Est_*, keep(B_*) label("Bonus") 
$COLOR_MAROON) /// + (Limit_Est_*, keep(L_*) label("Limit") $COLOR_GRAY), /// + rename(L_* = "" B_* = "") /// + order($rank_labels_usage) vertical /// + $COEFPLOT_VERTICAL_SETTINGS + + graph export "output/coef_usage_by_app.pdf", replace +end + +program reg_usage_by_app_combined + est clear + + foreach app in $app_list { + * Specify regression + cap drop `app' + cap gen `app' = PD_P432_Usage_`app' + label var `app' "`app'" + local yvar `app' + local baseline PD_P1_Usage_`app' + + * Run regression + gen_treatment_combined, suffix(_`yvar') var(`yvar') + reg_treatment_combined, yvar(`yvar') suffix(_`yvar') indep($STRATA `baseline') + est store `yvar' + } + + * Plot regressions + gen_rank_labels, prefix("C_") + + coefplot $app_list, /// + keep(C_*) order($rank_labels) vertical /// + nooffsets $COEFPLOT_VERTICAL_SETTINGS /// + legend(off) + + graph export "output/coef_usage_by_app_combined.pdf", replace +end + +program plot_limit_tight_by_app + * Preserve data + preserve + + * Make zero in areas where not all zeros + foreach time in P2 P3 P4 P5 P5432 P432 P43 { + foreach category in Facebook Instagram Twitter Snapchat Browser YouTube Other { + replace PD_`time'_LimitTight_`category' = 0 if PD_`time'_LimitTight != . & PD_`time'_LimitTight_`category' == . + } + } + + * Reshape data + keep UserID *LimitTight_* + + rename *LimitTight_* ** + rename_but, varlist(UserID) prefix(limit) + reshape long limit, i(UserID) j(measure) string + + split measure, p("_") + drop measure measure1 + rename (measure2 measure3) (measure app) + + * Recode data + encode measure, generate(measure_encode) + + merge m:1 app using "temp/app_rank_usage.dta", nogen keep(3) + + recode measure_encode /// + (1 = 1 "Period 2") /// + (2 = 2 "Period 3") /// + (3 = 3 "Period 4") /// + (6 = 4 "Period 5") /// + (4 = 5 "Periods 3 & 4") /// + (5 = 6 "Periods 2 to 4") /// + (7 = 7 "Periods 2 to 5"), /// + gen(measure_recode) + + * Plot data (all periods together) + gen dummy = 1 + + cispike limit if measure_recode == 7, /// + over1(dummy) over2(app_rank) /// + $CISPIKE_SETTINGS /// + graphopts($CISPIKE_VERTICAL_GRAPHOPTS /// + ytitle("Limit tightness (minutes/day)" " ") /// + legend(off)) + + graph export "output/cispike_limit_tight_combined_by_app.pdf", replace + + * Plot data (by period) + cispike limit if measure_recode <= 3, /// + over1(measure_recode) over2(app_rank) /// + $CISPIKE_SETTINGS /// + graphopts($CISPIKE_VERTICAL_GRAPHOPTS /// + ytitle("Limit tightness (minutes/day)" " ")) + + graph export "output/cispike_limit_tight_by_app.pdf", replace + + * Restore data + restore +end + +program reg_usage_by_time + syntax, [fitsby] + + est clear + + * Determine FITSBY restriction + if ("`fitsby'" == "fitsby") { + local fitsby "FITSBY" + local suffix "_fitsby" + } + else { + local fitsby "" + local suffix "" + } + + foreach hour of num 1(2)23 { + * Specify regression + cap drop H_`hour' + gen H_`hour' = PD_P432_Usage`fitsby'_H`hour' + label var H_`hour' "`hour'" + local yvar H_`hour' + local baseline PD_P1_Usage`fitsby'_H`hour' + + * Run regression + gen_treatment, suffix(_`yvar') var(`yvar') simple + reg_treatment, yvar(`yvar') suffix(_`yvar') indep($STRATA `baseline') simple + est store L`yvar' + + * run bonus regressions separately + cap drop H_`hour' + gen H_`hour' = PD_P3_Usage`fitsby'_H`hour' + label var H_`hour' "`hour'" + local yvar H_`hour' + local baseline PD_P1_Usage`fitsby'_H`hour' + + * Run regression + gen_treatment, suffix(_`yvar') var(`yvar') simple + reg_treatment, yvar(`yvar') suffix(_`yvar') indep($STRATA `baseline') 
simple + est store B`yvar' + } + + * Plot regressions + coefplot (BH_*, keep(B_*) label("Bonus") $COLOR_MAROON) /// + (LH_*, keep(L_*) label("Limit") $COLOR_GRAY), /// + rename(L_* = "" B_* = "") vertical /// + xtitle(" " "Hour") /// + $COEFPLOT_VERTICAL_SETTINGS /// + ytitle("Treatment effect (minutes/hour)" " ") + + graph export "output/coef_usage_by_time`suffix'.pdf", replace + + * Preserve data + preserve + + * Reshape data + keep PD_P1_Usage`fitsby'_H* + collapse (mean) PD_P1_Usage`fitsby'_H* + gen dummy = 1 + reshape long PD_P1_Usage`fitsby'_H, i(dummy) j(hour) + + * Recode data + replace PD_P1_Usage`fitsby'_H = PD_P1_Usage`fitsby'_H / 2 + replace hour = (hour + 1) / 2 + + * Label data + foreach hour of num 1(2)23 { + gen H_`hour' = . + label var H_`hour' "`hour'" + } + + * Plot regressions (with usage) + + // Manually set labels and legends for double axis figures + coefplot (BH_*, keep(B_*) label("Bonus") $COLOR_MAROON) /// + (LH_*, keep(L_*) label("Limit") $COLOR_BLACK), /// + rename(L_* = "" B_* = "") vertical /// + xtitle(" " "Hour") /// + $COEFPLOT_VERTICAL_SETTINGS /// + ytitle("Treatment effect (minutes/hour)" " ", axis(1)) /// + ytitle(" " "Usage (minutes/hour)", axis(2)) /// + ylabel(-4(2)4, axis(1)) yscale(range(-4, 4)) /// + ylabel(0(0.75)3, axis(2)) /// + yscale(alt) /// + addplot(bar PD_P1_Usage`fitsby'_H hour, /// + below yaxis(2) yscale(alt axis(2)) /// + color(gray%50) fintensity(100) barw(0.75)) + + graph export "output/coef_usage_by_time_usage`suffix'.pdf", replace + + * Restore data + restore +end + +program reg_usage_by_time_scaled + syntax, [fitsby] + + est clear + + * Determine FITSBY restriction + if ("`fitsby'" == "fitsby") { + local fitsby "FITSBY" + local suffix "_fitsby" + } + else { + local fitsby "" + local suffix "" + } + + * Preserve data + preserve + + foreach hour of num 1(2)23 { + display(`hour') + * Normalize usage // ASK ABOUT THIS + cap drop H_`hour' + sum PD_P432_Usage`fitsby'_H`hour' if S3_Bonus == 0 & S2_LimitType == 0 + gen H_`hour' = PD_P432_Usage`fitsby'_H`hour' / r(mean) + + * Specify regression + label var H_`hour' "`hour'" + local yvar H_`hour' + local baseline PD_P1_Usage`fitsby'_H`hour' + + * Run regression + gen_treatment, suffix(_`yvar') var(`yvar') simple + reg_treatment, yvar(`yvar') suffix(_`yvar') indep($STRATA `baseline') simple + est store L`yvar' + + * run bonus regressions separately + cap drop H_`hour' + sum PD_P3_Usage`fitsby'_H`hour' if S3_Bonus == 0 & S2_LimitType == 0 + gen H_`hour' = PD_P3_Usage`fitsby'_H`hour' / r(mean) + + * Specify regression + label var H_`hour' "`hour'" + local yvar H_`hour' + local baseline PD_P1_Usage`fitsby'_H`hour' + + * Run regression + gen_treatment, suffix(_`yvar') var(`yvar') simple + reg_treatment, yvar(`yvar') suffix(_`yvar') indep($STRATA `baseline') simple + est store B`yvar' + } + + * Plot regressions + coefplot (BH_*, keep(B_*) label("Bonus") $COLOR_MAROON) /// + (LH_*, keep(L_*) label("Limit") $COLOR_GRAY), /// + rename(L_* = "" B_* = "") vertical /// + xtitle(" " "Hour") /// + $COEFPLOT_VERTICAL_SETTINGS /// + ytitle("Treatment effect" "(share of Control group usage)" " ") + + graph export "output/coef_usage_by_time_scaled`suffix'.pdf", replace + + * Restore data + restore +end + +program reg_usage_by_person + syntax, [fitsby] + + est clear + + * Determine FITSBY restriction + if ("`fitsby'" == "fitsby") { + local fitsby "FITSBY" + local suffix "_fitsby" + } + else { + local fitsby "" + local suffix "" + } + + * Specify regressions + include "input/lib/stata/define_heterogeneity.do" 
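+ * Note: define_heterogeneity.do is assumed to define the subgroup condition locals used below (E0/E1, A0/A1, G0/G1, U0/U1, R0/R1, L0/L1 for below-/above-median splits), which reg_treatment receives through its if() option.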
+ + + local label_E "Education" + local label_A "Age" + local label_G "Female" + local label_U "Baseline usage" + local label_R "Restriction index" + local label_L "Addiction index" + + * Run regressions + foreach mod in /*I*/ E A G U R L { + foreach group in 0 1 { + foreach yvar in PD_P5432_Usage`FITSBY' { + local baseline PD_P1_Usage`FITSBY' + local if `mod'`group' + + gen_treatment, suffix(_`mod') simple + label var L_`mod' "`label_`mod''" + label var B_`mod' "`label_`mod''" + reg_treatment, yvar(`yvar') suffix(_`mod') indep($STRATA `baseline') if(``if'') simple + est store `yvar'_`if' + } + } + } + + * Plot regressions + coefplot (*1, label("Above median") $COLOR_DARK_GREY) /// + (*0, label("Below median") $COLOR_LIGHT_GREY), /// + keep(L_*) /// + $COEFPLOT_HORIZONTAL_MED_SETTINGS /// + xtitle(" " "Treatment effect (minutes/day)") /// + $COEF_SMALL_LABELS + + graph export "output/coef_limit_usage_by_heterogeneity`suffix'.pdf", replace + + coefplot (*1, label("Above median") $COLOR_DARK_RED) /// + (*0, label("Below median") $COLOR_LIGHT_RED), /// + keep(B_*) /// + $COEFPLOT_HORIZONTAL_MED_SETTINGS /// + xtitle(" " "Treatment effect (minutes/day)") /// + $COEF_SMALL_LABELS + + graph export "output/coef_bonus_usage_by_heterogeneity`suffix'.pdf", replace +end + +program reg_usage_by_person_p3 + syntax, [fitsby] + + est clear + + * Determine FITSBY restriction + if ("`fitsby'" == "fitsby") { + local fitsby "FITSBY" + local suffix "_fitsby" + } + else { + local fitsby "" + local suffix "" + } + + * Specify regressions + include "input/lib/stata/define_heterogeneity.do" + + + local label_E "Education" + local label_A "Age" + local label_G "Female" + local label_U "Baseline usage" + local label_R "Restriction index" + local label_L "Addiction index" + + * Run regressions + foreach mod in /*I*/ E A G U R L { + foreach group in 0 1 { + foreach yvar in PD_P3_Usage`FITSBY' { + local baseline PD_P1_Usage`FITSBY' + local if `mod'`group' + + gen_treatment, suffix(_`mod') simple + label var L_`mod' "`label_`mod''" + label var B_`mod' "`label_`mod''" + reg_treatment, yvar(`yvar') suffix(_`mod') indep($STRATA `baseline') if(``if'') simple + est store `yvar'_`if' + } + } + } + + * Plot regressions + coefplot (*1, label("Above median") $COLOR_DARK_GREY) /// + (*0, label("Below median") $COLOR_LIGHT_GREY), /// + keep(L_*) /// + $COEFPLOT_HORIZONTAL_MED_SETTINGS /// + xtitle(" " "Treatment effect (minutes/day)") /// + $COEF_SMALL_LABELS + + graph export "output/coef_limit_usage_by_heterogeneity_P3`suffix'.pdf", replace + + coefplot (*1, label("Above median") $COLOR_DARK_RED) /// + (*0, label("Below median") $COLOR_LIGHT_RED), /// + keep(B_*) /// + $COEFPLOT_HORIZONTAL_MED_SETTINGS /// + xtitle(" " "Treatment effect (minutes/day)") /// + $COEF_SMALL_LABELS + + graph export "output/coef_bonus_usage_by_heterogeneity_P3`suffix'.pdf", replace +end + + + + + +program reshape_self_control_outcomes + * Reshape wide to long + gen S4_Usage_FITSBY = PD_P3_UsageFITSBY + gen S3_Usage_FITSBY = PD_P2_UsageFITSBY + + keep UserID S3_Bonus S2_LimitType Stratifier /// + S1_Income S1_Education S0_Age S0_Gender /// + StratWantRestrictionIndex StratAddictionLifeIndex PD_P1_UsageFITSBY /// + S*_Usage_FITSBY /// + S*_PhoneUseChange_N /// + S*_AddictionIndex_N /// + S*_SMSIndex_N /// + S*_SWBIndex_N /// + S*_LifeBetter_N /// + S*_index_well_N + + local indep UserID S3_Bonus S2_LimitType Stratifier S1_* S0_* Strat* PD_* + rename_but, varlist(`indep') prefix(outcome) + reshape long outcome, i(UserID) j(measure) string + + split 
measure, p(_) + replace measure = measure2 + "_" + measure3 + "_" + measure4 if measure4 != "" + replace measure = measure2 + "_" + measure3 if measure4 == "" + rename measure1 survey + drop measure2 measure3 measure4 + + * Reshape long to wide + reshape wide outcome, i(UserID survey) j(measure) string + rename outcome* * + + * Recode data + encode survey, gen(S) + + * Label data + label var PhoneUseChange "Ideal use change" + label var AddictionIndex "Addiction scale x (-1)" + label var SMSIndex "SMS addiction scale x (-1)" + label var LifeBetter "Phone makes life better" + label var SWBIndex "Subjective well-being" + label var index_well "Survey index" +end + +program reg_iv_stacked_by_person + est clear + + * Preserve data + preserve + + * Reshape data + reshape_self_control_outcomes + + * Specify regression + local yvarset /// + PhoneUseChange_N /// + AddictionIndex_N /// + SMSIndex_N /// + LifeBetter_N /// + SWBIndex_N /// + index_well_N + + include "input/lib/stata/define_heterogeneity.do" + + * Run regressions + foreach if in /*I0 I1*/ E0 E1 A0 A1 G0 G1 U0 U1 R0 R1 L0 L1 { + foreach yvar in `yvarset' { + local baseline = "S1_`yvar'" + + * Treatment indicators + gen_treatment, suffix(_`yvar') simple + + * Specify regression + local indep i.S i.S#$STRATA i.S#c.`baseline' + + * Run regression + gen_usage_stacked, yvar(`yvar') suffix(_`yvar') var(`yvar') + reg_usage_stacked, yvar(`yvar') suffix(_`yvar') indep(`indep') if(``if'') + est store U_`yvar'_`if' + } + } + + * Plot regressions + foreach mod in /*I*/ E A G U R L { + local coef_plot0 /// + label("`label_`mod'0'") /// + mcolor(edkblue*0.7) ciopts(recast(rcap) lcolor(edkblue*0.7)) + + local coef_plot1 /// + label("`label_`mod'1'") /// + mcolor(edkblue*1.3) ciopts(recast(rcap) lcolor(edkblue*1.3)) + + coefplot (U_*_`mod'1, `coef_plot1') /// + (U_*_`mod'0, `coef_plot0'), /// + keep(U_*) /// + $COEFPLOT_HORIZONTAL_HTE_SETTINGS /// + xtitle(" " "Treatment effect" "(standard deviations per hour/day of use)") /// + $COEF_SMALL_LABELS + + graph export "output/coef_iv_self_control_by_`suffix_`mod''.pdf", replace + } + + * Restore data + restore +end + +program plot_wtp_motivation + * Preserve data + preserve + + * Specify groups + include "input/lib/stata/define_heterogeneity.do" + + foreach mod in I E A G U R L { + foreach group in 0 1 { + gen Motivation_`mod'_`group' = S2_Motivation ``mod'`group'' + } + } + + * Reshape data + keep UserID Motivation_* + reshape long Motivation, i(UserID) j(measure) string + split measure, p("_") + drop measure measure1 + rename measure2 measure + rename measure3 group + + * Recode data + encode measure, generate(measure_encode) + encode group, generate(group_encode) + + recode measure_encode /// + (2 = 1 "Education") /// + (1 = 2 "Age") /// + (3 = 3 "Female") /// + (7 = 4 "Baseline usage") /// + (5 = 5 "Restriction index") /// + (6 = 6 "Addiction index") /// + (4 = 7 "Income less than $50,000"), /// + gen(measure_recode) + + recode group_encode /// + (1 = 2 "Below median") /// + (2 = 1 "Above median"), /// + gen(group_recode) + + * Plot data (app categories together) + cispike Motivation if measure_recode != 7, /// + over1(group_recode) over2(measure_recode) /// + horizontal reverse /// + spikecolor(maroon gray) /// + cicolor(maroon gray) /// + graphopts($CISPIKE_HORIZONTAL_GRAPHOPTS /// + xtitle(" " "Behavior change premium") /// + $SMALL_LABELS) + + graph export "output/cispike_motivation_by_group.pdf", replace + + * Restore data + restore +end + +program plot_limit_wtp + * Preserve data + preserve + + * 
Specify groups + include "input/lib/stata/define_heterogeneity.do" + + foreach mod in I E A G U R L { + foreach group in 0 1 { + gen WTP_`mod'_`group' = S3_MPLLimit ``mod'`group'' + } + } + + * Reshape data + keep UserID WTP_* + reshape long WTP, i(UserID) j(measure) string + split measure, p("_") + drop measure measure1 + rename measure2 measure + rename measure3 group + + * Recode data + encode measure, generate(measure_encode) + encode group, generate(group_encode) + + recode measure_encode /// + (2 = 1 "Education") /// + (1 = 2 "Age") /// + (3 = 3 "Female") /// + (7 = 4 "Baseline usage") /// + (5 = 5 "Restriction index") /// + (6 = 6 "Addiction index") /// + (4 = 7 "Income less than $50,000"), /// + gen(measure_recode) + + recode group_encode /// + (1 = 2 "Below median") /// + (2 = 1 "Above median"), /// + gen(group_recode) + + * Plot data (app categories together) + cispike WTP if measure_recode != 7, /// + over1(group_recode) over2(measure_recode) /// + horizontal reverse /// + spikecolor(maroon gray) /// + cicolor(maroon gray) /// + graphopts($CISPIKE_HORIZONTAL_GRAPHOPTS /// + xtitle(" " "Willingness to pay for limit ($)") /// + $SMALL_LABELS) + + graph export "output/cispike_limit_motivation_by_group.pdf", replace + + * Restore data + restore +end + + +*********** +* Execute * +*********** + +main + diff --git a/17/replication_package/code/analysis/treatment_effects/code/HeterogeneityInstrumental.do b/17/replication_package/code/analysis/treatment_effects/code/HeterogeneityInstrumental.do new file mode 100644 index 0000000000000000000000000000000000000000..e61605646a0486e6358ab976a7d8beca37981524 --- /dev/null +++ b/17/replication_package/code/analysis/treatment_effects/code/HeterogeneityInstrumental.do @@ -0,0 +1,477 @@ +// Response to commitment, moderated by demand for flexibility + +*************** +* Environment * +*************** + +clear all +adopath + "input/lib/ado" +adopath + "input/lib/stata/ado" + +********************* +* Utility functions * +********************* + +program define_constants + yaml read YAML using "input/config.yaml" + yaml global STRATA = YAML.metadata.strata +end + +program define_plot_settings + global COEFPLOT_HORIZONTAL_SETTINGS /// + xline(0, lwidth(thin) lcolor(black)) /// + bgcolor(white) graphregion(color(white)) grid(w) /// + legend(cols(1) region(lcolor(white))) /// + xsize(6.5) ysize(6.5) + + global COEFPLOT_HORIZONTAL_MED_SETTINGS /// + xline(0, lwidth(thin) lcolor(black)) /// + bgcolor(white) graphregion(color(white)) grid(w) /// + legend(rows(1) region(lcolor(white))) /// + xsize(6.5) ysize(6.5) + + global SMALL_LABELS /// + xlabel(, labsize(small)) /// + xtitle(, size(small)) /// + ylabel(, labsize(small)) /// + ytitle(, size(small)) /// + legend(size(small)) + + global COEF_SMALL_LABELS /// + coeflabels(, labsize(small)) /// + $SMALL_LABELS + + global COEFPLOT_SETTINGS_STD /// + xline(0, lwidth(thin) lcolor(black)) /// + bgcolor(white) graphregion(color(white)) grid(w) /// + legend(rows(1) region(lcolor(white))) /// + xsize(6.5) ysize(4.5) /// + xtitle(" " "Treatment effect (standard deviations per hour/day of use)") + + global COLOR_MAROON /// + mcolor(maroon) ciopts(recast(rcap) lcolor(maroon)) + + global COLOR_LIGHT_RED /// + mcolor(maroon*0.7) ciopts(recast(rcap) lcolor(maroon*0.7)) + + global COLOR_DARK_RED /// + mcolor(maroon*1.3) ciopts(recast(rcap) lcolor(maroon*1.3)) + + global COLOR_LIGHT_GREY /// + mcolor(gray*0.8) ciopts(recast(rcap) lcolor(gray*0.8)) + + global COLOR_DARK_GREY /// + mcolor(gray*1.3) ciopts(recast(rcap) 
lcolor(gray*1.3)) + +end + +********************** +* Analysis functions * +********************** + +program main + define_constants + define_plot_settings + import_data + + reg_survey_heterogeneity + reg_iv_self_control_no_B3 + reg_welfare_itt + reg_welfare_late +end + +program import_data + use "input/final_data_sample.dta", clear +end + +program gen_coefficient + syntax, var(str) suffix(str) label_var(str) + + cap drop C`suffix' + gen C`suffix' = `var' + + local vlabel: variable label `label_var' + label var C`suffix' "`vlabel'" +end + +program reshape_self_control_outcomes_h + * Reshape wide to long + gen S4_Usage_FITSBY = PD_P3_UsageFITSBY + gen S3_Usage_FITSBY = PD_P2_UsageFITSBY + + keep UserID S3_Bonus S2_LimitType Stratifier /// + S1_Income S1_Education S0_Age S0_Gender /// + StratWantRestrictionIndex StratAddictionLifeIndex PD_P1_UsageFITSBY /// + S*_Usage_FITSBY /// + S*_PhoneUseChange_N /// + S*_AddictionIndex_N /// + S*_SMSIndex_N /// + S*_SWBIndex_N /// + S*_LifeBetter_N /// + S*_index_well_N + + local indep UserID S3_Bonus S2_LimitType Stratifier S1_* S0_* Strat* PD_* + rename_but, varlist(`indep') prefix(outcome) + reshape long outcome, i(UserID) j(measure) string + + split measure, p(_) + replace measure = measure2 + "_" + measure3 + "_" + measure4 if measure4 != "" + replace measure = measure2 + "_" + measure3 if measure4 == "" + rename measure1 survey + drop measure2 measure3 measure4 + + * Reshape long to wide + reshape wide outcome, i(UserID survey) j(measure) string + rename outcome* * + + * Recode data + encode survey, gen(S) + + * Label data + label var PhoneUseChange "Ideal use change" + label var AddictionIndex "Addiction scale x (-1)" + label var SMSIndex "SMS addiction scale x (-1)" + label var LifeBetter "Phone makes life better" + label var SWBIndex "Subjective well-being" + label var index_well "Survey index" +end + +program reshape_self_control_outcomes + * Reshape wide to long + gen S4_Usage_FITSBY = PD_P3_UsageFITSBY + gen S3_Usage_FITSBY = PD_P2_UsageFITSBY + + keep UserID S3_Bonus S2_LimitType Stratifier /// + S*_Usage_FITSBY /// + S*_PhoneUseChange_N /// + S*_AddictionIndex_N /// + S*_SMSIndex_N /// + S*_SWBIndex_N /// + S*_LifeBetter_N /// + S*_index_well_N + + local indep UserID S3_Bonus S2_LimitType Stratifier S1_* + rename_but, varlist(`indep') prefix(outcome) + reshape long outcome, i(`indep') j(measure) string + + split measure, p(_) + replace measure = measure2 + "_" + measure3 + "_" + measure4 if measure4 != "" + replace measure = measure2 + "_" + measure3 if measure4 == "" + rename measure1 survey + drop measure2 measure3 measure4 + + * Reshape long to wide + reshape wide outcome, i(UserID survey) j(measure) string + rename outcome* * + + * Recode data + encode survey, gen(S) + + * Label data + label var PhoneUseChange "Ideal use change" + label var AddictionIndex "Addiction scale x (-1)" + label var SMSIndex "SMS addiction scale x (-1)" + label var LifeBetter "Phone makes life better" + label var SWBIndex "Subjective well-being" + label var index_well "Survey index" +end + +program reg_usage_stacked_no_B3 + syntax, yvar(str) [suffix(str) indep(str) if(str)] + + cap drop i_S4 + gen i_S4 = S - 1 + gen B_`yvar'4 = i_S4 * B_`yvar' + + ivregress 2sls `yvar' (U`suffix' = B_`yvar'4 i.S#L_`yvar') `indep' `if', robust +end + +program reg_survey_heterogeneity + syntax + + est clear + + preserve + * Reshape data + reshape_self_control_outcomes_h + + * Specify regression + local yvarset /// + PhoneUseChange_N /// + AddictionIndex_N /// + SMSIndex_N /// + 
LifeBetter_N /// + SWBIndex_N /// + index_well_N + + include "input/lib/stata/define_heterogeneity.do" + + + * Run regressions + foreach if in R0 R1 L0 L1 U0 U1 { + foreach yvar in `yvarset' { + local baseline = "S1_`yvar'" + + * Treatment indicators + gen_treatment, suffix(_`yvar') simple + cap drop B3_`yvar' + cap drop B4_`yvar' + gen B3_`yvar' = B_`yvar' * (S == 1) + gen B4_`yvar' = B_`yvar' * (S == 2) + + * Specify regression + local indep i.S i.S#$STRATA i.S#c.`baseline' + + * Limit + gen_coefficient, var(L_`yvar') suffix(_`yvar') label_var(`yvar') + reg `yvar' C_`yvar' B3_`yvar' B4_`yvar' `indep' ``if'', robust cluster(UserID) + est store L_`yvar'_`if' + + * Bonus + gen_coefficient, var(B4_`yvar') suffix(_`yvar') label_var(`yvar') + reg `yvar' L_`yvar' B3_`yvar' C_`yvar' `indep' ``if'', robust cluster(UserID) + est store B_`yvar'_`if' + } + } + + * Plot regressions + foreach mod in R L U { + local coef_plot0 /// + label("`label_`mod'0'") /// + mcolor(gray) ciopts(recast(rcap) lcolor(gray)) + + local coef_plot1 /// + label("`label_`mod'1'") /// + mcolor(maroon) ciopts(recast(rcap) lcolor(maroon)) + + coefplot (L_*_`mod'1, `coef_plot1') /// + (L_*_`mod'0, `coef_plot0'), /// + keep(C_*) /// + $COEFPLOT_HORIZONTAL_SETTINGS /// + xtitle(" " "Treatment effect" "(standard deviations)") /// + $COEF_SMALL_LABELS + + graph export "output/coef_limit_itt_by_`suffix_`mod''.pdf", replace + + coefplot (B_*_`mod'1, `coef_plot1') /// + (B_*_`mod'0, `coef_plot0'), /// + keep(C_*) /// + $COEFPLOT_HORIZONTAL_SETTINGS /// + xtitle(" " "Treatment effect" "(standard deviations)") /// + $COEF_SMALL_LABELS + + graph export "output/coef_bonus_itt_by_`suffix_`mod''.pdf", replace + } + + * Restore data + restore + +end + +program reg_iv_self_control_no_B3 + est clear + + * Preserve data + preserve + + * Reshape data + reshape_self_control_outcomes + + * Specify regression + local yvarset /// + PhoneUseChange_N /// + AddictionIndex_N /// + SMSIndex_N /// + LifeBetter_N /// + SWBIndex_N /// + index_well_N + + * Run regressions + foreach yvar in `yvarset' { + local baseline = "S1_`yvar'" + + * Treatment indicators + gen_treatment, suffix(_`yvar') simple + + * Specify regression + local indep i.S i.S#$STRATA i.S#c.`baseline' + + * Run regression + gen_usage_stacked, yvar(`yvar') suffix(_`yvar') var(`yvar') + reg_usage_stacked_no_B3, yvar(`yvar') suffix(_`yvar') indep(`indep') + est store U_`yvar' + } + + * Plot regressions + coefplot (U_*, $COLOR_MAROON), /// + keep(U_*) /// + $COEFPLOT_SETTINGS_STD /// + legend(off) + + graph export "output/coef_iv_self_control_no_B3.pdf", replace + + * Restore data + restore +end + +program reg_welfare_itt + est clear + + preserve + * Reshape data + reshape_self_control_outcomes_h + + * Specify regression + local yvar index_well_N + + include "input/lib/stata/define_heterogeneity.do" + + local label_E "Education" + local label_A "Age" + local label_G "Female" + local label_U "Baseline usage" + local label_R "Restriction index" + local label_L "Addiction index" + + gen MG_Indicator = 0 if S0_Gender == 1 + replace MG_Indicator = 1 if S0_Gender == 2 + + local baseline = "S1_`yvar'" + + * Treatment indicators + gen_treatment, simple + cap drop B3_`yvar' + cap drop B4_`yvar' + gen B3 = B * (S == 1) + gen B4 = B * (S == 2) + + * Specify regression + local indep i.S i.S#$STRATA i.S#c.`baseline' + + * Run regressions + foreach group in E A G U R L { + foreach s in 0 1 { + + local if "if M`group'_Indicator == `s'" + * Limit + cap drop C_`group' + gen C_`group' = L + label var C_`group' 
"`label_`group''" + reg `yvar' C_`group' B3 B4 `indep' ``if'', robust cluster(UserID) + est store L_`group'`s' + + * Bonus + cap drop C_`group' + gen C_`group' = B4 + label var C_`group' "`label_`group''" + reg `yvar' L B3 C_`group' `indep' ``if'', robust cluster(UserID) + est store B_`group'`s' + } + } + + * Plot regressions + + local coef_plot0 /// + label("Below median") + + local coef_plot1 /// + label("Above median") + + coefplot (L*1, `coef_plot1' $COLOR_DARK_GREY) /// + (L*0, `coef_plot0' $COLOR_LIGHT_GREY), /// + keep(C_*) /// + $COEFPLOT_HORIZONTAL_MED_SETTINGS /// + xtitle(" " "Treatment effect" "(standard deviations)") /// + $COEF_SMALL_LABELS + + graph export "output/coef_heterogenous_limit_itt_welfare.pdf", replace + + coefplot (B*1, `coef_plot1' $COLOR_DARK_RED) /// + (B*0, `coef_plot0' $COLOR_LIGHT_RED), /// + keep(C_*) /// + $COEFPLOT_HORIZONTAL_MED_SETTINGS /// + xtitle(" " "Treatment effect" "(standard deviations)") /// + $COEF_SMALL_LABELS + + graph export "output/coef_heterogenous_bonus_itt_welfare.pdf", replace + + * Restore data + restore +end + +program reg_welfare_late + est clear + + preserve + * Reshape data + reshape_self_control_outcomes_h + + * Specify regression + local yvar index_well_N + + include "input/lib/stata/define_heterogeneity.do" + + local label_E "Education" + local label_A "Age" + local label_G "Female" + local label_U "Baseline usage" + local label_R "Restriction index" + local label_L "Addiction index" + + gen MG_Indicator = 0 if S0_Gender == 1 + replace MG_Indicator = 1 if S0_Gender == 2 + + + local baseline = "S1_`yvar'" + + * Specify regression + local indep i.S i.S#$STRATA i.S#c.`baseline' + + * Run regressions + foreach group in E A G U R L { + foreach s in 0 1 { + + local if "if M`group'_Indicator == `s'" + + * Create usage variable (make negative per issue 184 comments) + cap drop U_`group' + gen U_`group' = -1 * Usage_FITSBY + + * Converts usage to hours /day from minutes/day + replace U_`group' = U_`group'/60 + label var U_`group' "`label_`group''" + + * Run regression + gen_treatment, suffix(_`yvar') simple + reg_usage_stacked, yvar(`yvar') suffix(_`group') indep(`indep') if(``if'') + + est store U_`group'`s' + } + } + + * Plot regressions + + local coef_plot0 /// + label("Below median") /// + mcolor(gray) ciopts(recast(rcap) lcolor(gray)) + + local coef_plot1 /// + label("Above median") /// + mcolor(maroon) ciopts(recast(rcap) lcolor(maroon)) + + coefplot (U*1, `coef_plot1') /// + (U*0, `coef_plot0'), /// + keep(U_*) /// + $COEFPLOT_HORIZONTAL_MED_SETTINGS /// + xtitle(" " "Treatment effect" "(standard deviations per hour/day of use)") /// + $COEF_SMALL_LABELS + + graph export "output/coef_heterogenous_late_welfare.pdf", replace + +end + +*********** +* Execute * +*********** + +main \ No newline at end of file diff --git a/17/replication_package/code/analysis/treatment_effects/code/ModelHeterogeneity.R b/17/replication_package/code/analysis/treatment_effects/code/ModelHeterogeneity.R new file mode 100644 index 0000000000000000000000000000000000000000..5bfab692ff0cc4f5856f0740662d4a4ba07beaa4 --- /dev/null +++ b/17/replication_package/code/analysis/treatment_effects/code/ModelHeterogeneity.R @@ -0,0 +1,1406 @@ +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Setup +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +# Import plotting functions and constants from lib file +source('input/lib/r/ModelFunctions.R') +p_B <- (hourly_rate / num_days) / 60 +F_B <- (hourly_rate 
* max_hours) / num_days +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Helper Functions +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Adds what decile (or alternatively different step) the variable `x` is in +add_deciles <- function(x, step=0.1){ +decile <- cut(x, + breaks=quantile(x, + probs=seq(0,1,by=step), + na.rm = TRUE), + include.lowest=TRUE, + labels=FALSE) +return(decile) +} + +# Regresses for tau in bins by decile and plots +plot_tau <- function(df, tau_data, decile_name, variable_name, xlabel, eq='usage ~ PD_P1_UsageFITSBY + B + S', filename){ + + tau_data$decile <- tau_data[[decile_name]] + df$decile <- df[[decile_name]] + df$amount_var <- df[[variable_name]] + + taus <- c() + deciles <- sort(unique(tau_data$decile)) + + formula <- eq + for (dec_idx in deciles){ + var <- paste('L', dec_idx, sep="") + tau_data[[var]] <- ifelse(is.na(tau_data$decile), + 0, + ifelse(tau_data$decile == dec_idx, + tau_data$L, + 0)) + formula <- paste(paste(formula, '+'), var) + } + fit <- lm(data = tau_data, + formula = formula) + + print(formula) + + for (dec_idx in deciles){ + var <- paste('L', dec_idx, sep="") + taus <- c(taus, as.numeric(fit$coefficients[var])) + } + + decile_amnts <- df %>% + group_by(decile) %>% + summarize(amount = mean(amount_var), .groups = "drop") + + plot_tau <- data.frame(taus) + plot_tau$decile <- deciles + + plot_tau %<>% left_join(decile_amnts, + by ="decile", + how="left") + + print(plot_tau) + + a <- ggplot(plot_tau, aes(x=amount, y=taus)) + + geom_point(color=maroon) + + theme_classic() + + geom_smooth(method = "lm", + formula = "y ~ x", + se = FALSE, + color="black", + size=0.6) + + labs(x = xlabel, + y = "Tau L") + + ggsave(sprintf('output/%s.pdf', filename), plot=a, width=6.5, height=4.5, units="in") +} + +# Binscatter by zkashner (more or less) +plot_value <- function(df, decile_name, variable_name, variable_amount, xlabel, ylabel, filename){ + + df$decile <- df[[decile_name]] + df$value_var <- df[[variable_name]] + df$amount_var <- df[[variable_amount]] + + values <- c() + deciles <- unique(df$decile) + + for (dec_idx in deciles){ + subset <- df$decile == dec_idx + value <- mean(df[subset,]$value_var, na.rm = T) / num_days + values <- c(values, value) + } + + decile_amnts <- df %>% + group_by(decile) %>% + summarize(amount = mean(amount_var), .groups = "drop") + + plot_value <- data.frame(values) + plot_value$decile <- deciles + + plot_value %<>% merge(decile_amnts, + by ="decile", + how="left") + + print(plot_value) + + a <- ggplot(plot_value, aes(x=amount, y=values)) + + geom_point(color=maroon) + + theme_classic() + + geom_smooth(method = "lm", + formula = "y ~ x", + se = FALSE, + color="black", + size=0.6) + + labs(x = xlabel, + y = ylabel) + + ggsave(sprintf('output/%s.pdf', filename), plot=a, width=6.5, height=4.5, units="in") +} + +reshape_tau_data <- function(df){ + tau_data <- df %>% + select( + UserID, + w, + L, + B, + S, + addiction_decile, + restriction_decile, + tightness_decile, + PD_P1_UsageFITSBY, + PD_P2_UsageFITSBY, + PD_P3_UsageFITSBY, + PD_P4_UsageFITSBY, + PD_P5_UsageFITSBY + ) + + tau_data %<>% + gather( + key = 'period', + value = 'usage', + -UserID, + -w, + -L, + -B, + -S, + -PD_P1_UsageFITSBY, + -addiction_decile, + -restriction_decile, + -tightness_decile + ) + + return(tau_data) +} + +reshape_tightness <- function(df){ + + pt1_usage <- df %>% + select( + UserID, + paste('PD_DailyUsage_', 1:10, sep="")) %>% + gather( + key = 'period', + value = 'usage', + 
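+      # gather() stacks the ten PD_DailyUsage_* columns into long form; UserID is
+      # excluded just below so it survives as the row identifier, and the
+      # group_by/summarize that follows averages days 1-10 into each user's
+      # first-half period 1 usage (PD_P1_PT1_UsageFITSBY).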
-UserID, + ) %>% + group_by(UserID) %>% + summarize(PD_P1_PT1_UsageFITSBY = mean(usage, na.rm=TRUE), .groups = "drop") + + pt2_usage <- df %>% + select( + UserID, + paste('PD_DailyUsage_', 11:20, sep="")) %>% + gather( + key = 'period', + value = 'usage', + -UserID, + ) %>% + group_by(UserID) %>% + summarize(PD_P1_PT2_UsageFITSBY = mean(usage, na.rm=TRUE), .groups = "drop") + + tightness_df <- df %>% + select( + UserID, + w, L, B, S, + tightness_decile) %>% + merge(pt1_usage, how="left", on="UserID") %>% + merge(pt2_usage, how="left", on="UserID") + + return(tightness_df) +} + +reshape_mispredict <- function(df){ +mpd_df <- df %>% + mutate(Mispredict_P2_S2 = PD_P2_UsageFITSBY - S2_PredictUseNext_1_W) %>% + mutate(Mispredict_P3_S2 = PD_P3_UsageFITSBY - S2_PredictUseNext_2_W) %>% + mutate(Mispredict_P4_S2 = PD_P4_UsageFITSBY - S2_PredictUseNext_3_W) %>% + mutate(Mispredict_S2 = (Mispredict_P2_S2 + Mispredict_P3_S2 + Mispredict_P4_S2)/3) %>% + mutate(mispredict_decile = add_deciles(Mispredict_P2_S2)) %>% + mutate(Mispredict_P3_S3 = PD_P3_UsageFITSBY - S3_PredictUseNext_1_W) %>% + mutate(Mispredict_P4_S3 = PD_P4_UsageFITSBY - S3_PredictUseNext_2_W) %>% + mutate(Mispredict_P5_S3 = PD_P5_UsageFITSBY - S3_PredictUseNext_3_W) %>% + mutate(Mispredict_S3 = (Mispredict_P3_S3 + Mispredict_P4_S3 + Mispredict_P5_S3)/3) %>% + mutate(Mispredict_P4_S4 = PD_P4_UsageFITSBY - S4_PredictUseNext_1_W) %>% + mutate(Mispredict_P5_S4 = PD_P5_UsageFITSBY - S4_PredictUseNext_2_W) %>% + mutate(Mispredict_S4 = (Mispredict_P4_S4 + Mispredict_P5_S4)/2) %>% + mutate(Mispredict_S34 = (3*Mispredict_S3 + 2*Mispredict_S4)/5) %>% #reweight + select(UserID, w, mispredict_decile, Mispredict_P2_S2, Mispredict_S2, Mispredict_S3, Mispredict_S4, Mispredict_S34) + + return(mpd_df) +} + +plot_taus <- function(df, tau_data, tightness_df){ + plot_tau(df, + tau_data, + decile_name = 'addiction_decile', + variable_name = 'StratAddictionLifeIndex', + xlabel = "Addiction Index", + filename = "binscatter_heterogeneity_tau_addiction") + + plot_tau(df, + tau_data, + decile_name = 'restriction_decile', + variable_name = 'StratWantRestrictionIndex', + xlabel = "Restriction Index", + filename = "binscatter_heterogeneity_tau_restriction") + + plot_tau(df, + tau_data, + decile_name = 'tightness_decile', + variable_name = 'PD_P2_LimitTightFITSBY', + xlabel = "Limit Tightness", + filename = "binscatter_heterogeneity_tau_tightness") + + plot_tau(df, + tightness_df, + decile_name = 'tightness_decile', + variable_name = 'PD_P2_LimitTightFITSBY', + xlabel = "Limit Tightness", + eq = 'PD_P1_PT2_UsageFITSBY ~ PD_P1_PT1_UsageFITSBY + B + S', + filename = "binscatter_heterogeneity_tau_tightness_placebo") +} + +plot_valuations <- function(df){ + vars <- c('behavioral_change_premium', 'S3_MPLLimit') + names <- c('Behavioral Change Premium', 'Limit Valuation') + file_exts <- c('behavioral_change_premium', 'v_L') + + for (i in 1:2){ + var_name <- vars[i] + ylabel <- names[i] + file_ext <- file_exts[i] + + plot_value(df, + decile_name = "addiction_decile", + variable_name = var_name, + variable_amount = "StratAddictionLifeIndex", + xlabel = "Addiction Index", + ylabel = ylabel, + filename = sprintf("binscatter_heterogeneity_%s_addiction", file_ext)) + + plot_value(df, + decile_name = "restriction_decile", + variable_name = var_name, + variable_amount = "StratWantRestrictionIndex", + xlabel = "Restriction Index", + ylabel = ylabel, + filename = sprintf("binscatter_heterogeneity_%s_restriction", file_ext)) + + plot_value(df, + decile_name = "tightness_decile", + 
variable_name = var_name, + variable_amount = "PD_P2_LimitTightFITSBY", + xlabel = "Limit Tightness", + ylabel = ylabel, + filename = sprintf("binscatter_heterogeneity_%s_tightness", file_ext)) + } +} + +plot_mispredict <- function(mpd_df){ + plot_value(mpd_df, + decile_name = "mispredict_decile", + variable_name = "Mispredict_S34", + variable_amount = "Mispredict_P2_S2", + xlabel = "Survey 2 Misprediction (minutes/day)", + ylabel = "Surveys 3 and 4 Misprediction (minutes/day)", + filename = "binscatter_heterogeneity_misprediction") +} + +find_tau_spec <- function(df){ + + days_beg <- 1:10 + days_end <- 11:20 + + tau_data <- df %>% + mutate(tightness=ifelse(L,PD_P2_LimitTightFITSBY, 0)) %>% + mutate(PD_P1beg_Usage_FITSBY = + rowSums(.[paste0('PD_DailyUsageFITSBY_',days_beg)], na.rm=TRUE)/length(days_beg), + PD_P1end_Usage_FITSBY = + rowSums(.[paste0('PD_DailyUsageFITSBY_',days_end)], na.rm=TRUE)/length(days_end)) %>% + select( + UserID, + w, L, B, S, + PD_P1_UsageFITSBY, + PD_P1beg_Usage_FITSBY, + PD_P1end_Usage_FITSBY, + PD_P2_UsageFITSBY, + PD_P3_UsageFITSBY, + PD_P4_UsageFITSBY, + PD_P5_UsageFITSBY, + PD_P2_LimitTightFITSBY, + tightness + ) + + +fit_1 <-lm('PD_P1end_Usage_FITSBY ~ B + L + tightness + PD_P1beg_Usage_FITSBY + S', + data= tau_data, weights = w) + +cluster_se1 <- as.vector(summary(fit_1,cluster = c("UserID"))$coefficients[,"Std. Error"]) + +# the last command prints the stargazer output (in this case as text) + +fit_2 <- lm('PD_P2_UsageFITSBY ~ B + L + tightness + PD_P1_UsageFITSBY+ S', + data= tau_data, weights = w) + +cluster_se2 <- as.vector(summary(fit_2,cluster = c("UserID"))$coefficients[,"Std. Error"]) + + +fit_3 <- lm('PD_P3_UsageFITSBY ~ B + L + tightness + PD_P1_UsageFITSBY + S', + data=tau_data,weights = w) + +cluster_se3 <- as.vector(summary(fit_3,cluster = c("UserID"))$coefficients[,"Std. 
Error"]) + + +stargazer(fit_1, fit_2, fit_3, + omit.stat = c("adj.rsq","f","ser"), + se = list(cluster_se1, cluster_se2, cluster_se3), + covariate.labels = c("Bonus treatment", "Limit treatment", + "Limit treatment $\\times$ period 2 limit tightness", + "1st half of period 1 FITSBY use", "Period 1 FITSBY use"), + align = TRUE, + dep.var.labels.include = FALSE, + column.labels = c('\\shortstack{2nd half of period 1 \\\\ FITSBY use}', + '\\shortstack{Period 2 \\\\ FITSBY use}', + '\\shortstack{Period 3 \\\\ FITSBY use}'), + title = "", + omit = c("Intercept", "S1", "S2", "S3", "S4", + "S5", "S6", "S7", "S8", "Constant"), + type = "latex", + omit.table.layout = "n", + float = FALSE, + dep.var.caption = "", + star.cutoffs = NA, + out = "output/heterogeneity_reg.tex" + ) + + return() +} + + + + +plot_weekly_effects <- function(df, filename1, filename2){ + get_df <- function(df){ + bonus_coefs <- c() + limit_coefs <- c() + bonus_lower <- c() + bonus_upper <- c() + limit_upper<- c() + limit_lower<- c() + + for (t in 4:15){ + dep_var <- sprintf('PD_WeeklyUsageFITSBY_%s', t) + eq <- paste0(dep_var, '~ PD_WeeklyUsageFITSBY_3 + L + B + S') + + # Run regression + fit <- lm(data = df, + formula = eq, + weights = w) + + + + bonus_coefs <- c(bonus_coefs, summary(fit)$coefficients[4,1]) + limit_coefs <- c(limit_coefs, summary(fit)$coefficients[3,1]) + bonus_lower <- c(bonus_lower, summary(fit, cluster= c("UserID"))$coefficients[4,1] -1.96*summary(fit, cluster= c("UserID"))$coefficients[4,2]) + bonus_upper <- c(bonus_upper, summary(fit, cluster= c("UserID"))$coefficients[4,1] +1.96*summary(fit, cluster= c("UserID"))$coefficients[4,2]) + limit_upper<- c(limit_upper, summary(fit, cluster= c("UserID"))$coefficients[3,1] +1.96*summary(fit, cluster= c("UserID"))$coefficients[3,2]) + limit_lower<- c(limit_lower, summary(fit, cluster= c("UserID"))$coefficients[3,1] -1.96*summary(fit, cluster= c("UserID"))$coefficients[3,2]) + + } + + weeklydataframe <- as.data.frame(cbind(bonus_coefs, limit_coefs, bonus_lower, + bonus_upper, limit_lower, limit_upper )) + + + names(weeklydataframe) <- c("bonus_coefs", "limit_coefs", "bonus_lower", + "bonus_upper", "limit_lower", "limit_upper") + + + return(weeklydataframe) + } + + + df_weekly <- get_df(df) + + x <- c('4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15') + names <- factor(x, levels=x) + + weeklydf <- data.frame(names, df_weekly) + + b <- ggplot(weeklydf, aes(x=names, width=.2)) + + geom_point(aes(y=bonus_coefs), colour=maroon, stat="identity") + + geom_errorbar(aes(ymin=bonus_upper, ymax=bonus_lower), colour=maroon, stat="identity") + + scale_y_continuous(name="Treatment effect (minutes/day)") + + theme_classic() + + #theme(axis.text.x = element_text(angle = 45, hjust = 1)) + + labs(x = "Week of experiment") + + theme(legend.text.align = 0, + legend.key.height = unit(1, "cm"), + legend.position="bottom") + + theme(legend.margin=margin(0,0,0,0), + legend.box.margin=margin(-10,-10,-10,-10)) + + theme(axis.text.x = element_text(colour="black")) + + coord_cartesian(ylim = c(-70, 5)) + + theme(legend.text=element_text(size=11)) + + theme( # remove the vertical grid lines + panel.grid.major.x = element_blank() , + # explicitly set the horizontal lines (or they will disappear too) + panel.grid.major.y = element_line( size=.05, color="grey" ) + ) + + l <- ggplot(weeklydf, aes(x=names, width=.2)) + + geom_point(aes(y=limit_coefs), colour=grey, stat="identity") + + geom_errorbar(aes(ymin=limit_upper, ymax=limit_lower), colour=grey, stat="identity") + + 
scale_y_continuous(name="Treatment effect (minutes/day)") + + theme_classic() + + #theme(axis.text.x = element_text(angle = 45, hjust = 1)) + + labs(x = "Week of experiment") + + theme(legend.text.align = 0, + legend.key.height = unit(1, "cm"), + legend.position="bottom") + + theme(legend.margin=margin(0,0,0,0), + legend.box.margin=margin(-10,-10,-10,-10)) + + theme(axis.text.x = element_text(colour="black")) + + coord_cartesian(ylim = c(-70, 5)) + + theme(legend.text=element_text(size=11)) + + theme( # remove the vertical grid lines + panel.grid.major.x = element_blank() , + # explicitly set the horizontal lines (or they will disappear too) + panel.grid.major.y = element_line( size=.05, color="grey" ) + ) + + + ggsave(sprintf('output/%s.pdf', filename1), plot=b, width=6.5, height=4.5, units="in") + ggsave(sprintf('output/%s.pdf', filename2), plot=l, width=6.5, height=4.5, units="in") + } + + + + + + +plot_treatment_effects <- function(df, filename1, filename2, filename3){ + period_usage <- c("PD_P2_UsageFITSBY", "PD_P3_UsageFITSBY", "PD_P4_UsageFITSBY", "PD_P5_UsageFITSBY") + + bonus_coefs <- c() + limit_coefs <- c() + bonus_lower <- c() + bonus_upper <- c() + limit_upper<- c() + limit_lower<- c() + + for (period in period_usage){ + dep_var <- period + eq <- paste0(dep_var, '~ PD_P1_UsageFITSBY + L + B + S') + + fit <- lm(data = df, + formula = eq, + weights = w) + + bonus_coefs <- c(bonus_coefs, summary(fit)$coefficients[4,1]) + bonus_lower <- c(bonus_lower, summary(fit, cluster= c("UserID"))$coefficients[4,1] -1.96*summary(fit, cluster= c("UserID"))$coefficients[4,2]) + bonus_upper <- c(bonus_upper, summary(fit, cluster= c("UserID"))$coefficients[4,1] +1.96*summary(fit, cluster= c("UserID"))$coefficients[4,2]) + + limit_coefs <- c(limit_coefs, summary(fit)$coefficients[3,1]) + limit_lower <- c(limit_lower, summary(fit, cluster= c("UserID"))$coefficients[3,1] -1.96*summary(fit, cluster= c("UserID"))$coefficients[3,2]) + limit_upper <- c(limit_upper, summary(fit, cluster= c("UserID"))$coefficients[3,1] +1.96*summary(fit, cluster= c("UserID"))$coefficients[3,2]) + + } + + x <- c('Period 2', 'Period 3', 'Period 4', 'Period 5') + names <- factor(x, levels=x) + periodtreatments <- data.frame(names, bonus_coefs, limit_coefs, bonus_lower, + bonus_upper, limit_lower, limit_upper ) + + + + cols <- c("Bonus"=maroon , + "Limit"=grey) + + cols_shape <- c("Bonus"=16 , + "Limit"=15) + + a <- ggplot(periodtreatments, aes(x=names, width=.2)) + + geom_point(aes(y=bonus_coefs, colour="Bonus"), stat="identity", position = position_nudge(x = -.1)) + + geom_point(aes(y=limit_coefs, colour="Limit"), stat="identity", position = position_nudge(x = .1))+ + geom_errorbar(aes(ymin=bonus_upper, ymax=bonus_lower, width=0.05), stat="identity", colour=maroon, position = position_nudge(x = -.1)) + + geom_errorbar(aes(ymin=limit_lower, ymax=limit_upper, width=0.05), stat="identity", colour=grey, position=position_nudge(x = .1)) + + scale_y_continuous(name="Treatment effect (minutes/day)") + + theme_classic() + + #theme(axis.text.x = element_text(angle = 45, hjust = 1)) + + labs(x = "") + + theme(legend.text.align = 0, + legend.key.height = unit(1, "cm"), + legend.position="bottom") + + theme(legend.margin=margin(0,0,0,0), + legend.box.margin=margin(-10,-10,-10,-10)) + + theme(axis.text.x = element_text(colour="black")) + + coord_cartesian(ylim = c(-70, 5)) + + theme(legend.text=element_text(size=11)) + + theme( # remove the vertical grid lines + panel.grid.major.x = element_blank() , + # explicitly set the horizontal 
lines (or they will disappear too) + panel.grid.major.y = element_line( size=.05, color="grey" ) + )+ + scale_colour_manual(name = "", values=cols, + labels = c("Bonus", "Limit")) + + guides(colour=guide_legend(title.position="top", + title.hjust =0.5)) + + + b <- ggplot(periodtreatments, aes(x=names, width=.2)) + + geom_point(aes(y=bonus_coefs), colour=maroon, stat="identity") + + geom_errorbar(aes(ymin=bonus_upper, ymax=bonus_lower, width=0.05), colour=maroon, stat="identity") + + scale_y_continuous(name="Treatment effect (minutes/day)") + + theme_classic() + + #theme(axis.text.x = element_text(angle = 45, hjust = 1)) + + labs(x = "") + + theme(legend.text.align = 0, + legend.key.height = unit(1, "cm"), + legend.position="bottom") + + theme(legend.margin=margin(0,0,0,0), + legend.box.margin=margin(-10,-10,-10,-10)) + + theme(axis.text.x = element_text(colour="black")) + + coord_cartesian(ylim = c(-70, 5)) + + theme(legend.text=element_text(size=11)) + + theme( # remove the vertical grid lines + panel.grid.major.x = element_blank() , + # explicitly set the horizontal lines (or they will disappear too) + panel.grid.major.y = element_line( size=.05, color="grey" ) + ) + + + + l <- ggplot(periodtreatments, aes(x=names, width=.2)) + + geom_point(aes(y=limit_coefs), colour=grey, stat="identity") + + geom_errorbar(aes(ymin=limit_upper, ymax=limit_lower, width=0.05), colour=grey, stat="identity") + + scale_y_continuous(name="Treatment effect (minutes/day)") + + theme_classic() + + #theme(axis.text.x = element_text(angle = 45, hjust = 1)) + + labs(x = "") + + theme(legend.text.align = 0, + legend.key.height = unit(1, "cm"), + legend.position="bottom") + + theme(legend.margin=margin(0,0,0,0), + legend.box.margin=margin(-10,-10,-10,-10)) + + theme(axis.text.x = element_text(colour="black")) + + coord_cartesian(ylim = c(-70, 5)) + + theme(legend.text=element_text(size=11)) + + theme( # remove the vertical grid lines + panel.grid.major.x = element_blank() , + # explicitly set the horizontal lines (or they will disappear too) + panel.grid.major.y = element_line( size=.05, color="grey" ) + ) + +ggsave(sprintf('output/%s.pdf', filename1), plot=a, width=6.5, height=4.5, units="in") +ggsave(sprintf('output/%s.pdf', filename2), plot=b, width=6.5, height=4.5, units="in") +ggsave(sprintf('output/%s.pdf', filename3), plot=l, width=6.5, height=4.5, units="in") + + +} + +plot_treatment_effects_interaction <- function(df, filename1){ + period_usage <- c("PD_P2_UsageFITSBY", "PD_P3_UsageFITSBY", "PD_P4_UsageFITSBY", "PD_P5_UsageFITSBY") + + bonus_coefs <- c() + limit_coefs <- c() + bonus_lower <- c() + bonus_upper <- c() + limit_upper<- c() + limit_lower<- c() + interaction_coefs <- c() + interaction_lower <- c() + interaction_upper <- c() + + for (period in period_usage){ + dep_var <- period + eq <- paste0(dep_var, '~ PD_P1_UsageFITSBY + L + B + L*B + S') + + fit <- lm(data = df, + formula = eq, + weights = w) + + bonus_coefs <- c(bonus_coefs, summary(fit)$coefficients[4,1]) + bonus_lower <- c(bonus_lower, summary(fit, cluster= c("UserID"))$coefficients[4,1] -1.96*summary(fit, cluster= c("UserID"))$coefficients[4,2]) + bonus_upper <- c(bonus_upper, summary(fit, cluster= c("UserID"))$coefficients[4,1] +1.96*summary(fit, cluster= c("UserID"))$coefficients[4,2]) + + limit_coefs <- c(limit_coefs, summary(fit)$coefficients[3,1]) + limit_lower <- c(limit_lower, summary(fit, cluster= c("UserID"))$coefficients[3,1] -1.96*summary(fit, cluster= c("UserID"))$coefficients[3,2]) + limit_upper <- c(limit_upper, summary(fit, 
cluster= c("UserID"))$coefficients[3,1] +1.96*summary(fit, cluster= c("UserID"))$coefficients[3,2]) + + interaction_coefs <- c(interaction_coefs, summary(fit)$coefficients[12,1]) + interaction_lower <- c(interaction_lower, summary(fit, cluster= c("UserID"))$coefficients[12,1] -1.96*summary(fit, cluster= c("UserID"))$coefficients[12,2]) + interaction_upper <- c(interaction_upper, summary(fit, cluster= c("UserID"))$coefficients[12,1] +1.96*summary(fit, cluster= c("UserID"))$coefficients[12,2]) + + } + + + +x <- c('Period 2', 'Period 3', 'Period 4', 'Period 5') +names <- factor(x, levels=x) + +periodtreatments <- data.frame(names, bonus_coefs, bonus_lower, bonus_upper, limit_coefs, limit_lower, limit_upper,interaction_coefs, interaction_lower, interaction_upper) + +periodtreatments$bonus <- "Bonus" +periodtreatments$limit <- "Limit" +periodtreatments$BL <- "Limit x Bonus" + + +maroon <- '#94343c' +grey <- '#848484' +skyblue <- '#87CEEB' +black <- '#000000' +deepskyblue <- '#B0C4DE' + + cols <- c("Bonus"=maroon , + "Limit"=grey, + "Limit x Bonus"= deepskyblue) + +cols_shape <- c("Bonus"=15 , + "Limit"=19, + "Limit x Bonus"= 17) + +a <- ggplot(periodtreatments, aes(x=names, width=.2)) + + geom_point(aes(y=bonus_coefs, colour=bonus, shape =bonus), stat="identity", position = position_nudge(x = -.2)) + + geom_point(aes(y=limit_coefs, colour=limit, shape=limit), stat="identity", position = position_nudge(x = 0))+ + geom_point(aes(y=interaction_coefs, colour=BL, shape=BL), stat="identity", position = position_nudge(x = 0.2)) + + geom_errorbar(aes(ymin=bonus_upper, ymax=bonus_lower, width=0.05), stat="identity", colour=maroon, position = position_nudge(x = -.2)) + + geom_errorbar(aes(ymin=limit_lower, ymax=limit_upper, width=0.05), stat="identity", colour=grey, position=position_nudge(x =0)) + + geom_errorbar(aes(ymin=interaction_lower, ymax=interaction_upper, width=0.05), stat="identity", colour=deepskyblue, position=position_nudge(x =0.2)) + + scale_y_continuous(name="Treatment effect (minutes/day)") + + theme_classic() + + #theme(axis.text.x = element_text(angle = 45, hjust = 1)) + + labs(x = "") + + theme(legend.text.align = 0, + legend.key.height = unit(1, "cm"), + legend.position="bottom") + + theme(legend.margin=margin(0,0,0,0), + legend.box.margin=margin(-10,-10,-10,-10)) + + theme(axis.text.x = element_text(colour="black")) + + coord_cartesian(ylim = c(-80, 20)) + + theme(legend.text=element_text(size=11)) + + theme( # remove the vertical grid lines + panel.grid.major.x = element_blank() , + # explicitly set the horizontal lines (or they will disappear too) + panel.grid.major.y = element_line( size=.05, color="grey" ) + )+ + scale_colour_manual(name = "", + values=cols) + + scale_shape_manual(name = "", + values = cols_shape) + +ggsave(sprintf('output/%s.pdf', filename1), plot=a, width=6.5, height=4.5, units="in") + + +} + +get_opt <- function(df) { + # Specify regression + + analysisUser <- read_dta("input/AnalysisUser.dta") + + limit <- analysisUser %>% + filter(AppCode %in% df$UserID) %>% + select(OptedOut) %>% + filter(!is.na(OptedOut)) + + + estimate <- + list(nrow(limit %>% filter(OptedOut==1)) , + signif(nrow(limit %>% filter(OptedOut==1))/ nrow(df %>% filter(L==1))*100, digits=1)) + + names(estimate) <- c('numberpeopleoptedout', 'percentoptedout') + + save_nrow(estimate, filename ="optingout", suffix="") +} + + +get_addiction_treatment_effect <- function(df, filename){ + survey_outcomes <- c("index_well_N", "SWBIndex_N", "LifeBetter_N", "SMSIndex_N", "AddictionIndex_N", 
"PhoneUseChange_N") + + bonus_coefs <- c() + limit_coefs <- c() + bonus_lower <- c() + bonus_upper <- c() + limit_upper<- c() + limit_lower<- c() + + + df <- df %>% + mutate(S43_PhoneUseChange_N = (S4_PhoneUseChange_N + S3_PhoneUseChange_N)/2, + S43_AddictionIndex_N = (S4_AddictionIndex_N + S3_AddictionIndex_N)/2, + S43_SMSIndex_N = (S4_SMSIndex_N + S3_SMSIndex_N)/2, + S43_LifeBetter_N = (S4_LifeBetter_N + S3_LifeBetter_N)/2, + S43_SWBIndex_N = (S4_SWBIndex_N + S3_SWBIndex_N)/2, + S43_index_well_N = (S4_index_well_N + S3_index_well_N)/2) + + + for (outcome in survey_outcomes){ + dep_var <- sprintf("S4_%s", outcome) + indep_var <- sprintf("S1_%s", outcome) + eq <- paste0(paste0(dep_var, '~ L + B + S + '), indep_var) + + fit <- lm(data = df, + formula = eq, + weights = w) + + bonus_coefs <- c(bonus_coefs, summary(fit)$coefficients[3,1]) + bonus_lower <- c(bonus_lower, summary(fit, cluster= c("UserID"))$coefficients[3,1] -1.96*summary(fit, cluster= c("UserID"))$coefficients[3,2]) + bonus_upper <- c(bonus_upper, summary(fit, cluster= c("UserID"))$coefficients[3,1] +1.96*summary(fit, cluster= c("UserID"))$coefficients[3,2]) + + + dep_var_limit <- sprintf("S43_%s", outcome) + indep_var <- sprintf("S1_%s", outcome) + eq_limit <- paste0(paste0(dep_var_limit, '~ L + B + S + '), indep_var) + + fit_limit <- lm(data = df, + formula = eq_limit, + weights = w) + + limit_coefs <- c(limit_coefs, summary(fit_limit)$coefficients[3,1]) + limit_lower <- c(limit_lower, summary(fit_limit, cluster= c("UserID"))$coefficients[3,1] -1.96*summary(fit_limit, cluster= c("UserID"))$coefficients[3,2]) + limit_upper <- c(limit_upper, summary(fit_limit, cluster= c("UserID"))$coefficients[3,1] +1.96*summary(fit_limit, cluster= c("UserID"))$coefficients[3,2]) + + } + + weeklydataframe <- as.data.frame(cbind(bonus_coefs, limit_coefs, bonus_lower, + bonus_upper, limit_lower, limit_upper )) + + names(weeklydataframe) <- c("bonus_coefs", "limit_coefs", "bonus_lower", + "bonus_upper", "limit_lower", "limit_upper") + + + + x <- c('Survey index', 'Subjective well-being', 'Phone makes life better', 'SMS addiction scale x (-1)', 'Addiction scale x(-1)', 'Ideal use change') + names <- factor(x, levels=x) + + weeklydf <- data.frame(names, weeklydataframe) + + + cols <- c("Bonus"=maroon, + "Limit"=grey) + + a <- ggplot(weeklydf, aes(x=names, width=.2)) + + geom_point(aes(y=bonus_coefs, colour="Bonus"), stat="identity", position = position_nudge(x = -.1)) + + geom_point(aes(y=limit_coefs, colour="Limit"), stat="identity", position = position_nudge(x = .1))+ + geom_errorbar(aes(ymin=bonus_upper, ymax=bonus_lower, width=0.05), stat="identity", colour=maroon, position = position_nudge(x = -.1)) + + geom_errorbar(aes(ymin=limit_lower, ymax=limit_upper, width=0.05), stat="identity", colour=grey, position=position_nudge(x = .1)) + + scale_y_continuous(name="Treatment effect (standard deviation)") + + theme_classic() + + scale_colour_manual(name = "", values=cols, + labels = c("Bonus", "Limit")) + + labs(x = "") + + geom_hline(yintercept=0) + + coord_flip(ylim = c(-0.2,0.6)) + + theme(legend.text=element_text(size=11)) + + theme( # remove the vertical grid lines + panel.grid.major.x = element_blank() , + # explicitly set the horizontal lines (or they will disappear too) + panel.grid.major.y = element_line( size=.09, color="grey" ) + ) + + theme(legend.position="bottom") + + ggsave(sprintf('output/%s.pdf', filename), plot=a, width=6.5, height=4.5, units="in") +} + + +get_addiction_scalar <- function(df){ + addiction <- df %>% + 
select(contains("Addiction")) + + df_addiction <- df + for (i in 1:16){ + df_addiction <- df_addiction %>% + mutate(!!as.name(paste0("High_S1_Addiction_", i)) := ifelse(!!as.name(paste0("S1_Addiction_",i))>0.5, 1, 0)) %>% + mutate(!!as.name(paste0("High_S3_Addiction_", i)) := ifelse(!!as.name(paste0("S3_Addiction_",i))>0.5, 1, 0)) + } + + df_means <- df_addiction + for (i in 1:16){ + df_means <- df_means %>% + mutate(!!as.name(paste0("Mean_High_S1_Addiction_", i)) := mean(!!as.name(paste0("High_S1_Addiction_",i)), na.rm = T)) %>% + mutate(!!as.name(paste0("Mean_High_S3_Addiction_", i)) := mean(!!as.name(paste0("High_S3_Addiction_",i)), na.rm = T)) + } + + df_S3_addiction <- df_means %>% + select(contains("Mean_High_S3")) %>% + unique() + + + df_S3_addiction$top_seven <- rowMeans(df_S3_addiction[1:7], na.rm=TRUE) + df_S3_addiction$bottom_nine <- rowMeans(df_S3_addiction[8:16], na.rm=TRUE) + + mean_top_seven <- (df_S3_addiction$top_seven)*100 + mean_bottom_nine <- (df_S3_addiction$bottom_nine)*100 + + df_addiction_high <- df_addiction %>% + select(contains("High_S3_Addiction_")) + + df_addiction_high$top_seven_any <- rowSums(df_addiction_high[1:7], na.rm=TRUE) + df_addiction_high$bottom_nine_any <- rowSums(df_addiction_high[8:16], na.rm=TRUE) + + df_addiction_high <-df_addiction_high %>% + mutate(top_seven_any_indicator = ifelse(top_seven_any>0, 1, 0), + bottom_nine_any_indicator = ifelse(bottom_nine_any>0, 1, 0)) + + mean_top_seven_any <- mean(df_addiction_high$top_seven_any_indicator, na.rm=TRUE)*100 + mean_bottom_nine_any <- mean(df_addiction_high$bottom_nine_any_indicator, na.rm=TRUE)*100 + + mean_top_seven <- signif(mean_top_seven, digits=2) + mean_top_seven_any <- signif(mean_top_seven_any, digits=2) + mean_bottom_nine <- signif(mean_bottom_nine, digits=2) + mean_bottom_nine_any <- signif(mean_bottom_nine_any, digits=2) + + limit_tightness_df <- df %>% + filter(L==1) %>% + select(contains("PD_P5432_LimitTight")) + + limit_tightness_df_nomissing <- df %>% + filter(L==1) %>% + filter(PD_P5432_LimitTight>0) + + limit_tightness_df_pfive <- df %>% + filter(L==1) %>% + select(contains("PD_P5_LimitTight")) + + limit_tightness_df_nomissing_pfive <- df %>% + filter(L==1) %>% + filter(PD_P5_LimitTight>0) + + percent_positive_tightness <- nrow(limit_tightness_df_nomissing) / nrow(limit_tightness_df) + percent_positive_tightness_pfive <- nrow(limit_tightness_df_nomissing_pfive) / nrow(limit_tightness_df_pfive) + + average_limit_tightness <- mean(limit_tightness_df$PD_P5432_LimitTight, na.rm=TRUE) + average_limit_tightness_pfive <- mean(limit_tightness_df_pfive$PD_P5_LimitTight, na.rm=TRUE) + + percentpositivetightness <- signif(percent_positive_tightness, digits=2)*100 + percentpositivetightnesspfive <- signif(percent_positive_tightness_pfive, digits=2)*100 + + averagelimittightness <- signif(average_limit_tightness, digits=2) + averagelimittightnesspfive <- signif(average_limit_tightness_pfive, digits=2) + + + limit_df <- df %>% + filter(L==1) %>% + select(PD_P5432_LimitTight_Facebook, PD_P5432_LimitTight_Browser, PD_P5432_LimitTight_YouTube, PD_P5432_LimitTight_Instagram) + + + limit_df[is.na(limit_df)] <- 0 + + mean_fb <- mean(limit_df$PD_P5432_LimitTight_Facebook) + mean_browser <- mean(limit_df$PD_P5432_LimitTight_Browser) + mean_youtube <- mean(limit_df$PD_P5432_LimitTight_YouTube) + mean_insta <- mean(limit_df$PD_P5432_LimitTight_Instagram) + + mean_insta_nice <- signif(mean_insta, digits=1) + mean_youtube_nice <- signif(mean_youtube, digits=1) + mean_browser_nice <- signif(mean_browser, 
digits=1) + mean_fb_nice <- signif(mean_fb, digits=1) + + mpl_df <- df %>% + filter(B==1) %>% + select(S2_PredictUseInitial, S2_PredictUseBonus) + + mean_initial_use <- mean(mpl_df$S2_PredictUseInitial, na.rm=TRUE)/60 + mean_use_bonus <- mean(mpl_df$S2_PredictUseBonus, na.rm=TRUE)/60 + + + df <- df %>% + mutate(F_B_uncensored = 50*PD_P1_UsageFITSBY/20) %>% + mutate(F_B_min = ifelse(F_B_uncensored<150, F_B_uncensored, 150)) %>% + mutate(F_B = F_B_min/num_days) + + FB <- mean(df$F_B, na.rm = T) + + num_days <- 20 + hourly_rate <- 50 + max_hours <- 3 + p_B <- (hourly_rate / num_days) + abcd <- p_B*0.5*(mean_use_bonus+mean_initial_use) + MPL <- FB - abcd + + MPLearningsmean <- MPL*20 + + MPLvalued <- signif(mean(df$S2_MPL, na.rm=T), digits=2) + MPLearningsnice <- signif(MPLearningsmean, digits=2) + MPLpremiumnice <- MPLvalued - MPLearningsnice + + p <-paste0(p_B, "0") + meanpredictuse <- signif(mean_initial_use, digits=2) + meanpredictbonus <- signif(mean_use_bonus, digits=2) + abcd <- signif(abcd, digits=3) + MPL <- signif(MPL, digits=3) + vB <- MPLvalued/20 + + behaviourpremium <- vB - MPL + + fit_3 <- lm(data=df, PD_P5432_Usage_Other ~ PD_P1_Usage_Other + L + B + S) + limitotherfitsby <- fit_3$coefficients[['L']] + limitotherfitsbynice <- signif(limitotherfitsby, digits=2) + + + estimate <- + list(mean_top_seven, mean_top_seven_any, mean_bottom_nine, mean_bottom_nine_any, + percentpositivetightness, averagelimittightness, percentpositivetightnesspfive, averagelimittightnesspfive, mean_insta_nice, mean_youtube_nice, mean_browser_nice, + mean_fb_nice, MPLvalued, MPLearningsnice, MPLpremiumnice, + p,meanpredictuse,meanpredictbonus, abcd, MPL, behaviourpremium, limitotherfitsbynice) + names(estimate) <- c('meantopsevenaddiction', 'meantopsevenanyaddiction', 'meanbottomnineaddiction', + 'meanbottomnineanyaddiction', 'percentpositivetightness', 'averagelimittightness', 'percentpositivetightnesspfive', 'averagelimittightnesspfive', 'instalimittight', 'youtubelimittight', + 'browserlimittight', 'fblimittight', + 'MPLvalued', 'MPLearningsnice', 'MPLpremiumnice','p','meanpredictuse','meanpredictbonus', 'abcd', 'MPL', 'behaviourpremium', 'limitotherfitsbynice') + + save_nrow(estimate, filename ="addiction_scalars", suffix="") + + + +} + +get_swb_effect_exported_limit <- function(df){ + df <- df %>% + mutate( S43_PhoneUseChange_N = (S4_PhoneUseChange_N + S3_PhoneUseChange_N)/2, + S43_AddictionIndex_N = (S4_AddictionIndex_N + S3_AddictionIndex_N)/2, + S43_SMSIndex_N = (S4_SMSIndex_N + S3_SMSIndex_N)/2, + S43_LifeBetter_N = (S4_LifeBetter_N + S3_LifeBetter_N)/2, + S43_SWBIndex_N = (S4_SWBIndex_N + S3_SWBIndex_N)/2, + S43_index_well_N = (S4_index_well_N + S3_index_well_N)/2 , + S43_PhoneUseChange = (S4_PhoneUseChange + S3_PhoneUseChange)/2, + S43_AddictionIndex = (S4_AddictionIndex + S3_AddictionIndex)/2, + S43_SMSIndex = (S4_SMSIndex + S3_SMSIndex)/2, + S43_LifeBetter = (S4_LifeBetter + S3_LifeBetter)/2, + S43_SWBIndex = (S4_SWBIndex + S3_SWBIndex)/2, + S43_index_well= (S4_index_well + S3_index_well)/2) + + + fit<- lm_robust(data=df, formula = S43_PhoneUseChange_N ~ S1_PhoneUseChange_N + B+ L+ S, cluster=UserID ) + + estimate <- list (fit$coefficients[['L']]) + names(estimate) <- c('limitidealcoefn') + se <- list (summary(fit)$coefficients[4,2]) + names(se) <- c('limitidealsen') + pval <- list (summary(fit)$coefficients[4,4]) + names(pval) <- c('pvallimitideal') + p_value_list <- fit[5] + p_value <- p_value_list[['p.value']] + + p_adj <- p.adjust(p_value, method = "BH") + pvallimit <- list(p_adj[4]) + 
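+  # p.adjust(..., method = "BH") applies the Benjamini-Hochberg false discovery
+  # rate correction across the regression's coefficient p-values; the Limit
+  # coefficient comes fourth (after the intercept, the survey 1 baseline, and
+  # Bonus), so p_adj[4] is stored as the FDR-adjusted q-value alongside the raw
+  # p-value. For example, p.adjust(c(0.01, 0.02, 0.04), method = "BH") returns
+  # c(0.03, 0.03, 0.04).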
names(pvallimit) <- 'qadjlimitphonechange' + + + fit1 <- lm_robust(data=df, formula = S43_PhoneUseChange ~ S1_PhoneUseChange + B+ L+ S, cluster=UserID ) + estimate1 <- list (fit1$coefficients[['L']]) + names(estimate1) <- c('limitidealcoef') + se1 <- list (summary(fit1)$coefficients[4,2]) + names(se1) <- c('limitidealse') + + fit2 <- lm_robust(data=df, formula = S43_AddictionIndex_N ~ S1_AddictionIndex_N + B+ L+ S, cluster=UserID ) + estimate2 <- list (fit2$coefficients[['L']]) + names(estimate2) <- c('limitaddictioncoefn') + se2 <- list (summary(fit2)$coefficients[4,2]) + names(se2) <- c('limitaddictionsen') + pval2 <- list (summary(fit2)$coefficients[4,4]) + names(pval2) <- c('pvallimitaddict') + p_value_list <- fit2[5] + p_value <- p_value_list[['p.value']] + + p_adj <- p.adjust(p_value, method = "BH") + pvallimit2 <- list(p_adj[4]) + names(pvallimit2) <- 'qadjlimitaddictionindex' + + fit3 <- lm_robust(data=df, formula = S43_AddictionIndex ~ S1_AddictionIndex + B+ L+ S, cluster=UserID ) + estimate3 <- list (fit3$coefficients[['L']]) + names(estimate3) <- c('limitaddictioncoef') + se3 <- list (summary(fit3)$coefficients[4,2]) + names(se3) <- c('limitaddictionse') + + fit4 <- lm_robust(data=df, formula = S43_SMSIndex_N ~ S1_SMSIndex_N + B+ L+ S, cluster=UserID ) + estimate4 <- list (fit4$coefficients[['L']]) + names(estimate4) <- c('limitsmscoefn') + se4 <- list (summary(fit4)$coefficients[4,2]) + names(se4) <- c('limitsmssen') + pval4 <- list (summary(fit4)$coefficients[4,4]) + names(pval4) <- c('pvallimitsmsindex') + p_value_list <- fit4[5] + p_value <- p_value_list[['p.value']] + + p_adj <- p.adjust(p_value, method = "BH") + pvallimit3 <- list(p_adj[4]) + names(pvallimit3) <- 'qadjlimitsmsindex' + + + fit5 <- lm_robust(data=df, formula = S43_SMSIndex ~ S1_SMSIndex + B+ L+ S, cluster=UserID ) + estimate5 <- list (fit5$coefficients[['L']]) + names(estimate5) <- c('limitsmscoef') + se5 <- list (summary(fit5)$coefficients[4,2]) + names(se5) <- c('limitsmsse') + + + fit6 <- lm_robust(data=df, formula = S43_LifeBetter_N ~ S1_LifeBetter_N + B+ L+ S, cluster=UserID ) + estimate6 <- list (fit6$coefficients[['L']]) + names(estimate6) <- c('limitlifebettercoefn') + se6 <- list (summary(fit6)$coefficients[4,2]) + names(se6) <- c('limitlifebettersen') + pval6 <- list (summary(fit6)$coefficients[4,4]) + names(pval6) <- c('pvallimitlifebetter') + p_value_list <- fit6[5] + p_value <- p_value_list[['p.value']] + + p_adj <- p.adjust(p_value, method = "BH") + + pvallimit4 <- list(p_adj[4]) + names(pvallimit4) <- 'qadjlimitlifebetter' + + + + fit7 <- lm_robust(data=df, formula = S43_LifeBetter ~ S1_LifeBetter_N + B+ L+ S, cluster=UserID ) + estimate7 <- list (fit7$coefficients[['L']]) + names(estimate7) <- c('limitlifebettercoef') + se7 <- list (summary(fit7)$coefficients[4,2]) + names(se7) <- c('limitlifebetterse') + + + + + fit8 <- lm_robust(data=df, formula = S43_SWBIndex_N ~ S1_SWBIndex_N + B+ L+ S, cluster=UserID ) + estimate8 <- list (fit8$coefficients[['L']]) + names(estimate8) <- c('limitswbindexcoefn') + se8 <- list (summary(fit8)$coefficients[4,2]) + names(se8) <- c('limitswbindexsen') + pval8 <- list (summary(fit8)$coefficients[4,4]) + names(pval8) <- c('pvallimitswbindex') + p_value_list <- fit8[5] + p_value <- p_value_list[['p.value']] + + p_adj <- p.adjust(p_value, method = "BH") + pvallimit5 <- list(p_adj[4]) + names(pvallimit5) <- 'qadjlimitswbindex' + + + fit9 <- lm_robust(data=df, formula = S43_SWBIndex ~ S1_SWBIndex + B+ L+ S, cluster=UserID ) + estimate9 <- list 
(fit9$coefficients[['L']]) + names(estimate9) <- c('limitswbindexcoef') + se9 <- list (summary(fit9)$coefficients[4,2]) + names(se9) <- c('limitswbindexse') + + + fit10 <- lm_robust(data=df, formula = S43_index_well_N ~ S1_index_well_N + B+ L+ S, cluster=UserID ) + estimate10 <- list (fit10$coefficients[['L']]) + names(estimate10) <- c('limitindexwellcoefn') + se10 <- list (summary(fit10)$coefficients[4,2]) + names(se10) <- c('limitindexwellsen') + pval10 <- list (summary(fit10)$coefficients[4,4]) + names(pval10) <- c('pvallimitindexwell') + p_value_list <- fit10[5] + p_value <- p_value_list[['p.value']] + + p_adj <- p.adjust(p_value, method = "BH") + pvallimit6 <- list(p_adj[4]) + names(pvallimit6) <- 'qadjlimitindexwell' + + fit11 <- lm_robust(data=df, formula = S43_index_well ~ S1_index_well + B+ L+ S, cluster=UserID ) + estimate11 <- list (fit11$coefficients[['L']]) + names(estimate11) <- c('limitindexwellcoef') + se11 <- list (summary(fit11)$coefficients[4,2]) + names(se11) <- c('limitindexwellse') + + limit_effect <- list.merge(estimate, estimate1, estimate2, estimate3, estimate4, estimate5, estimate6, + estimate7, estimate8, estimate9, estimate10, estimate11, se, se1, se2, se3, se4, se5, se6, se7, se8, + se9, se10, se11, pval, pval2, pval4, pval6, pval8, pval10, pvallimit, pvallimit2, pvallimit3, pvallimit4, + pvallimit5, pvallimit6) + + return(limit_effect) + +} + +get_swb_effect_exported_bonus <- function(df){ + fit<- lm_robust(data=df, formula = S4_PhoneUseChange_N ~ S1_PhoneUseChange_N + B+ L+ S, cluster=UserID ) + + estimate <- list (fit$coefficients[['B']]) + names(estimate) <- c('bonusidealcoefn') + se <- list (summary(fit)$coefficients[3,2]) + names(se) <- c('bonusidealsen') + pvaluebonusideal <- list (summary(fit)$coefficients[3,4]) + names(pvaluebonusideal) <- c('pvaluebonusideal') + p_value_list <- fit[5] + p_value <- p_value_list[['p.value']] + + p_adj <- p.adjust(p_value, method = "BH") + pvalbonus <- list(p_adj[3]) + names(pvalbonus) <- 'qadjbonusphonechange' + + fit1 <- lm_robust(data=df, formula = S4_PhoneUseChange ~ S1_PhoneUseChange + B+ L+ S, cluster=UserID ) + estimate1 <- list (fit1$coefficients[['B']]) + names(estimate1) <- c('bonusidealcoef') + se1 <- list (summary(fit1)$coefficients[3,2]) + names(se1) <- c('bonusidealse') + + fit2 <- lm_robust(data=df, formula = S4_AddictionIndex_N ~ S1_AddictionIndex_N + B+ L+ S, cluster=UserID ) + estimate2 <- list (fit2$coefficients[['B']]) + names(estimate2) <- c('bonusaddictioncoefn') + se2 <- list (summary(fit2)$coefficients[3,2]) + names(se2) <- c('bonusaddictionsen') + pvaluebonusaddiction <- list (summary(fit2)$coefficients[3,4]) + names(pvaluebonusaddiction) <- c('pvaluebonusaddiction') + p_value_list <- fit2[5] + p_value <- p_value_list[['p.value']] + + p_adj <- p.adjust(p_value, method = "BH") + pvalbonus2 <- list(p_adj[3]) + names(pvalbonus2) <- 'qadjbonusaddictionindex' + + fit3 <- lm_robust(data=df, formula = S4_AddictionIndex ~ S1_AddictionIndex + B+ L+ S, cluster=UserID ) + estimate3 <- list (fit3$coefficients[['B']]) + names(estimate3) <- c('bonusaddictioncoef') + se3 <- list (summary(fit3)$coefficients[3,2]) + names(se3) <- c('bonusaddictionse') + + + fit4 <- lm_robust(data=df, formula = S4_SMSIndex_N ~ S1_SMSIndex_N + B+ L+ S, cluster=UserID ) + estimate4 <- list (fit4$coefficients[['B']]) + names(estimate4) <- c('bonussmscoefn') + se4 <- list (summary(fit4)$coefficients[3,2]) + names(se4) <- c('bonussmssen') + pvaluebonussms <- list (summary(fit4)$coefficients[3,4]) + names(pvaluebonussms) <- 
c('pvaluebonussms') + p_value_list <- fit4[5] + p_value <- p_value_list[['p.value']] + + p_adj <- p.adjust(p_value, method = "BH") + pvalbonus3 <- list(p_adj[3]) + names(pvalbonus3) <- 'qadjbonussmsnindex' + + + fit5 <- lm_robust(data=df, formula = S4_SMSIndex ~ S1_SMSIndex + B+ L+ S, cluster=UserID ) + estimate5 <- list (fit5$coefficients[['B']]) + names(estimate5) <- c('bonussmscoef') + se5 <- list (summary(fit5)$coefficients[3,2]) + names(se5) <- c('bonussmsse') + + + fit6 <- lm_robust(data=df, formula = S4_LifeBetter_N ~ S1_LifeBetter_N + B+ L+ S, cluster=UserID ) + estimate6 <- list (fit6$coefficients[['B']]) + names(estimate6) <- c('bonuslifebettercoefn') + se6 <- list (summary(fit6)$coefficients[3,2]) + names(se6) <- c('bonuslifebettersen') + pvaluebonuslifebetter <- list (summary(fit6)$coefficients[3,4]) + names(pvaluebonuslifebetter) <- c('pvaluebonuslifebetter') + p_value_list <- fit6[5] + p_value <- p_value_list[['p.value']] + + p_adj <- p.adjust(p_value, method = "BH") + pvalbonus4 <- list(p_adj[3]) + names(pvalbonus4) <- 'qadjbonuslifebetter' + + fit7 <- lm_robust(data=df, formula = S4_LifeBetter ~ S1_LifeBetter_N + B+ L+ S, cluster=UserID ) + estimate7 <- list (fit7$coefficients[['B']]) + names(estimate7) <- c('bonuslifebettercoef') + se7 <- list (summary(fit7)$coefficients[3,2]) + names(se7) <- c('bonuslifebetterse') + + + + fit8 <- lm_robust(data=df, formula = S4_SWBIndex_N ~ S1_SWBIndex_N + B+ L+ S, cluster=UserID ) + estimate8 <- list (fit8$coefficients[['B']]) + names(estimate8) <- c('bonusswbindexcoefn') + se8 <- list (summary(fit8)$coefficients[3,2]) + names(se8) <- c('bonusswbindexsen') + pvaluebonusswbindex <- list (summary(fit8)$coefficients[3,4]) + names(pvaluebonusswbindex) <- c('pvaluebonusswbindex') + p_value_list <- fit8[5] + p_value <- p_value_list[['p.value']] + + p_adj <- p.adjust(p_value, method = "BH") + pvalbonus5 <- list(p_adj[3]) + names(pvalbonus5) <- 'qadjbonusswbindex' + + + + fit9 <- lm_robust(data=df, formula = S4_SWBIndex ~ S1_SWBIndex + B+ L+ S, cluster=UserID ) + estimate9 <- list (fit9$coefficients[['B']]) + names(estimate9) <- c('bonusswbindexcoef') + se9 <- list (summary(fit9)$coefficients[3,2]) + names(se9) <- c('bonusswbindexse') + + + fit10 <- lm_robust(data=df, formula = S4_index_well_N ~ S1_index_well_N + B+ L+ S, cluster=UserID ) + estimate10 <- list (fit10$coefficients[['B']]) + names(estimate10) <- c('bonusindexwellcoefn') + se10 <- list (summary(fit10)$coefficients[3,2]) + names(se10) <- c('bonusindexwellsen') + pvaluebonusindexwell <- list (summary(fit10)$coefficients[3,4]) + names(pvaluebonusindexwell) <- c('pvaluebonusindexwell') + p_value_list <- fit10[5] + p_value <- p_value_list[['p.value']] + + p_adj <- p.adjust(p_value, method = "BH") + pvalbonus6 <- list(p_adj[3]) + names(pvalbonus6) <- 'qadjbonusindexwell' + + + fit11 <- lm_robust(data=df, formula = S4_index_well ~ S1_index_well + B+ L+ S, cluster=UserID ) + estimate11 <- list (fit11$coefficients[['B']]) + names(estimate11) <- c('bonusindexwellcoef') + se11 <- list (summary(fit11)$coefficients[3,2]) + names(se11) <- c('bonusindexwellse') + + bonus_effect <- list.merge(estimate, estimate1, estimate2, estimate3, estimate4, estimate5, estimate6, estimate7, + estimate8, estimate9, estimate10, estimate11, se, se1, se2, se3, se4, se5, se6, se7, se8, se9, se10, se11, + pvaluebonusideal, pvaluebonusaddiction, pvaluebonussms, pvaluebonuslifebetter, pvaluebonusswbindex, + pvaluebonusindexwell, pvalbonus, pvalbonus2, pvalbonus3, pvalbonus4, pvalbonus5, pvalbonus6) + + 
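+  # The merged list carries, for each survey outcome, the Bonus coefficient and
+  # standard error (normalized and raw versions), the raw p-value, and the
+  # BH-adjusted q-value; main() combines it with the Limit-side list and exports
+  # the values through save_tex2()/save_tex_one().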
return(bonus_effect) + +} + + +plot_histogram_predicted <- function(df, filename){ + df_usage_predict <- df %>% + filter(B==0 & L==0) %>% + select(PD_P2_UsageFITSBY, PD_P3_UsageFITSBY, PD_P4_UsageFITSBY, + S2_PredictUseNext_1, S3_PredictUseNext_1, S4_PredictUseNext_1) %>% + mutate(diff2 =PD_P2_UsageFITSBY -S2_PredictUseNext_1, + diff3 = PD_P3_UsageFITSBY - S3_PredictUseNext_1, + diff4 = PD_P4_UsageFITSBY - S4_PredictUseNext_1) %>% + rowwise() %>% + mutate(diff = mean(c(diff2, diff3, diff4), na.rm=T)) + + a<- ggplot(df_usage_predict, aes(x=diff)) + + geom_histogram(aes(y = stat(count) / sum(count)), colour=maroon, fill=maroon) + + xlim(c(-150, 150)) + + theme_classic() + + labs(x = "Actual minus predicted FITSBY use (minutes/day)", + y="Fraction of control group") + + theme(panel.grid.major.x = element_blank(), + panel.grid.major.y = element_line( size=.1, color="lightsteelblue")) + + ggsave(sprintf('output/%s.pdf', filename), plot=a, width=6.5, height=4.5, units="in") + +} + +plot_individual_temptation_effects <- function(df, param_full, filename){ + + tau_data <- df %>% + select( + UserID, + w, L, B, S, + PD_P1_UsageFITSBY, + PD_P2_UsageFITSBY, + PD_P3_UsageFITSBY, + PD_P4_UsageFITSBY, + PD_P5_UsageFITSBY, + PD_P2_LimitTightFITSBY + ) + + fit_2 <- tau_data %>% + mutate(tightness=ifelse(L,PD_P2_LimitTightFITSBY, 0)) %>% + lm(formula = 'PD_P2_UsageFITSBY ~ PD_P1_UsageFITSBY + L + tightness + B + S', + weights = w) + + const_2 <- fit_2$coefficients['L'] + slope_2 <- fit_2$coefficients['tightness'] + + + fit_3 <- tau_data %>% + mutate(tightness=ifelse(L,PD_P2_LimitTightFITSBY, 0)) %>% + lm(formula = 'PD_P3_UsageFITSBY ~ PD_P1_UsageFITSBY + L + tightness + B + S', + weights = w) + + const_3 <- fit_3$coefficients['L'] + slope_3 <- fit_3$coefficients['tightness'] + + + + df <- df %>% + mutate(tau_tilde_L = const_3 + slope_3*PD_P3_LimitTightFITSBY, + tau_L_2 = const_2 + slope_2 *PD_P2_LimitTightFITSBY) %>% + mutate(x_ss_i_data = PD_P1_UsageFITSBY) + + rho <- param_full[['rho']] + alpha <- param_full[['alpha']] + lambda <- param_full[['lambda']] + delta <- param_full[['delta']] + eta <- param_full[['eta']] + zeta <- param_full[['zeta']] + omega <- param_full[['omega']] + naivete <- param_full[['naivete']] + mispredict <- param_full[['mispredict']] + + df <- df %>% + mutate(num = eta*tau_L_2/omega - (1-alpha)*delta*rho*(((eta-zeta)*tau_tilde_L/omega+zeta*rho*tau_L_2/omega) + (1+lambda)*mispredict*(-eta+(1-alpha)*delta*rho^2*((eta-zeta)*lambda+zeta))), + denom = 1 - (1-alpha)*delta*rho*(1+lambda), + gamma_spec = num/denom, + gamma_tilde_spec = gamma_spec - naivete) + + + df <- df %>% + mutate(intercept_spec = calculate_intercept_spec(x_ss_i_data, param_full, gamma_tilde_spec, gamma_spec, alpha, rho, lambda, mispredict, eta, zeta)) %>% + mutate(x_ss_spec = calculate_steady_state(param_full, gamma_tilde_spec, gamma_spec, alpha, rho, lambda, mispredict, eta, zeta, intercept_spec), + x_ss_zero_un =calculate_steady_state(param_full, 0, 0, alpha, rho, lambda, 0, eta, zeta, intercept_spec), + x_ss_zero =ifelse(x_ss_zero_un<0, 0, x_ss_zero_un), + delta_x = x_ss_spec - x_ss_zero, + delta_x_zero =ifelse(delta_x<0, 0, delta_x), + delta_x_zero_3300 = ifelse(delta_x_zero>300, 300, delta_x_zero)) + + temptation_effect_below_ten <- nrow(df %>% filter(delta_x_zero_3300<10)) / nrow(df %>% filter(!is.na(delta_x_zero_3300))) + temptationeffectbelowten <- signif(temptation_effect_below_ten, digits=2)*100 + + temptation_effect_above_100 <- nrow(df %>% filter(delta_x_zero_3300>100)) / nrow(df %>% 
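+  # delta_x_zero_3300 is the simulated effect of temptation on steady-state use
+  # (x_ss_spec minus x_ss_zero), censored below at 0 and above at 300
+  # minutes/day; the two shares computed here are exported as scalars via
+  # save_nrow() below and the censored variable is plotted in the histogram.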
filter(!is.na(delta_x_zero_3300))) + temptationeffectabovehundred <- signif(temptation_effect_above_100, digits=2)*100 + + + estimate <- + list(temptationeffectbelowten, temptationeffectabovehundred) + names(estimate) <- c('temptationeffectbelowten', 'temptationeffectabovehundred') + + save_nrow(estimate, filename ="individual_temptation_scalars", suffix="") + + + a<- ggplot(df, aes(x=delta_x_zero_3300)) + + geom_histogram(aes(y = stat(count) / sum(count)), colour=maroon, fill=maroon) + + xlim(c(0, 300)) + + ylim(c(0,0.11)) + + theme_classic() + + labs(x = "Effect of temptation on FITSBY use (minutes/day)", + y="Fraction of sample") + + theme(panel.grid.major.x = element_blank(), + panel.grid.major.y = element_line( size=.1, color="lightsteelblue")) + + ggsave(sprintf('output/%s.pdf', filename), plot=a, width=6.5, height=4.5, units="in") +} + + + + +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Excecute +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +main <- function(){ + + df <- import_data() %>% + mutate(addiction_decile = add_deciles(StratAddictionLifeIndex)) %>% + mutate(restriction_decile = add_deciles(StratWantRestrictionIndex, step=0.125)) %>% + mutate(tightness_decile = add_deciles(PD_P2_LimitTightFITSBY, step=1/6)) %>% + mutate(S2_PredictUseBonus = S2_PredictUseInitial * (1 - (S2_PredictUseBonus / 100))) %>% + mutate(f_tilde_2_B = (S2_PredictUseBonus + S2_PredictUseInitial) / 2) %>% + mutate(behavioral_change_premium = (S2_MPL/num_days) - F_B + (p_B*f_tilde_2_B)) + + param <- param_initial + param_full <- estimate_model(df, param) + plot_individual_temptation_effects(df, param_full, filename="hist_individual_temptation_effects") + + + bonus_effect <-get_swb_effect_exported_bonus(df) + limit_effect <-get_swb_effect_exported_limit(df) + + swb_effects <- list.merge(bonus_effect, limit_effect) + save_tex2(swb_effects, filename="swb_effects") + save_tex_one(swb_effects, filename="swb_effects_onedigit", suffix="one") + + tau_data <- reshape_tau_data(df) + tightness_df <- reshape_tightness(df) + mpd_df <- reshape_mispredict(df) + + plot_taus(df, tau_data, tightness_df) + plot_valuations(df) + plot_mispredict(mpd_df) + print('here') + find_tau_spec(df) + print('past here') + plot_treatment_effects(df, filename1="treatment_effects_periods_limit_bonus", filename2="treatment_effects_periods_bonus", filename3="treatment_effects_periods_limit") + plot_treatment_effects_interaction(df, filename1 = "interaction_treatment_effects") + plot_weekly_effects(df, filename1="treatment_effects_weeks_bonus", filename2 = "treatment_effects_weeks_limit") + #get_opt(df) + get_addiction_scalar(df) + plot_histogram_predicted(df, filename="histogram_predicted_actual_p24") + + df %<>% balance_data(magnitude=3) + plot_treatment_effects(df, filename1="treatment_effects_periods_limit_bonus_balanced", filename2="treatment_effects_periods_bonus_balanced", filename3="treatment_effects_periods_limit_balanced") + get_addiction_treatment_effect(df, filename="coef_usage_self_control_balance") + +} + +main() diff --git a/17/replication_package/code/analysis/treatment_effects/code/SurveyValidation.do b/17/replication_package/code/analysis/treatment_effects/code/SurveyValidation.do new file mode 100644 index 0000000000000000000000000000000000000000..99df79105507a99a7670d8fa0a95f31b86645234 --- /dev/null +++ b/17/replication_package/code/analysis/treatment_effects/code/SurveyValidation.do @@ -0,0 +1,136 @@ +// Description of data + +*************** +* 
Environment * +*************** + +clear all +adopath + "input/lib/ado" +adopath + "input/lib/stata/ado" + +********************* +* Utility functions * +********************* + +program define_settings + global DESCRIPTIVE_TAB /// + collabels(none) nodepvars noobs replace +end + +********************** +* Analysis functions * +********************** + +program main + import_data + define_settings + + correlation_motivation + reg_prediction_reward +end + +program import_data + use "input/final_data_sample.dta", clear +end + +program correlation_motivation + preserve + + generate tightness=0 + replace tightness=PD_P2_LimitTightFITSBY if (S2_LimitType != 0) + + foreach v in S1_PhoneUseChange S1_AddictionIndex S1_SMSIndex S1_LifeBetter{ + replace `v' = -`v' + } + correlate S2_Benchmark S3_MPLLimit tightness S1_InterestInLimits S1_PhoneUseChange S1_AddictionIndex S1_SMSIndex S1_LifeBetter + matrix define correlation = r(C) + + drop * + + svmat correlation + qui ds + foreach i of numlist 1/8 { + replace correlation`i' = . if _n < `i' + } + + dta_to_txt, saving(output/motivation_correlation.txt) title() nonames replace + dta_to_txt, saving(output/motivation_correlation_beamer.txt) title() nonames replace + + restore +end + +program reg_prediction_reward + preserve + + * make a dummy for if high reward + gen PredictRewardHigh = 1 if PredictReward == 5 + replace PredictRewardHigh = 0 if PredictReward == 1 + + + * Reshape data to use predictions from all three surveys + keep UserID PredictRewardHigh S*_PredictUseNext_1 + local indep UserID PredictRewardHigh + rename_but, varlist(`indep') prefix(outcome) + reshape long outcome, i(`indep') j(measure) string + + gen survey = substr(measure, 2, 1) + drop measure + + rename outcome predicted + + * Save to be merged later + tempfile temp + save `temp' + + restore + + + preserve + * Reshape data to use actual predictions from those periods + keep UserID PD_P2_UsageFITSBY PD_P3_UsageFITSBY PD_P4_UsageFITSBY + local indep UserID + + rename_but, varlist(`indep') prefix(outcome) + reshape long outcome, i(`indep') j(measure) string + + gen survey = substr(measure, 5, 1) + drop measure + + rename outcome actual + + * Re-join with the actual predictions + merge 1:1 UserID survey using `temp' + + * Run the regressions in question + reg predicted PredictRewardHigh, robust + est store predicted + + reg actual PredictRewardHigh, robust + est store actual + + gen pred_min_actual = predicted - actual + reg pred_min_actual PredictRewardHigh, robust + est store pred_min_actual + + gen abs_pred_min_actual = abs(predicted - actual) + reg abs_pred_min_actual PredictRewardHigh, robust + est store abs_pred_min_actual + + * Save the regressions as a table + esttab predicted actual pred_min_actual abs_pred_min_actual /// + using "output/high_reward_reg.tex", /// + mtitle("\shortstack{Predicted\\use}" /// + "\shortstack{Actual\\use}" /// + "\shortstack{Predicted -\\actual use}" /// + "\shortstack{Absolute value of\\predicted - actual\\use}") /// + coeflabels(PredictRewardHigh "High prediction reward" /// + _cons "Constant") /// + $DESCRIPTIVE_TAB se nostar nonotes + restore +end + +*********** +* Execute * +*********** + +main diff --git a/17/replication_package/code/analysis/treatment_effects/input.txt b/17/replication_package/code/analysis/treatment_effects/input.txt new file mode 100644 index 0000000000000000000000000000000000000000..7025ab09c1bc41839bba9ce6caa36ff8c51a8ba9 --- /dev/null +++ b/17/replication_package/code/analysis/treatment_effects/input.txt @@ -0,0 +1,3 @@ 
+version https://git-lfs.github.com/spec/v1 +oid sha256:7e99d601b158b198f1db1ad6c634f9c4011573ec45848fe7d4e716fc3e26cac3 +size 914 diff --git a/17/replication_package/code/analysis/treatment_effects/make.py b/17/replication_package/code/analysis/treatment_effects/make.py new file mode 100644 index 0000000000000000000000000000000000000000..9fc0093f4180d5b58a441587b01f4024211e63f0 --- /dev/null +++ b/17/replication_package/code/analysis/treatment_effects/make.py @@ -0,0 +1,75 @@ +################### +### ENVIRONMENT ### +################### +import git +import imp +import os + +### SET DEFAULT PATHS +ROOT = '../..' + +PATHS = { + 'root' : ROOT, + 'lib' : os.path.join(ROOT, 'lib'), + 'config' : os.path.join(ROOT, 'config.yaml'), + 'config_user' : os.path.join(ROOT, 'config_user.yaml'), + 'input_dir' : 'input', + 'external_dir' : 'external', + 'output_dir' : 'output', + 'output_local_dir' : 'output_local', + 'makelog' : 'log/make.log', + 'output_statslog' : 'log/output_stats.log', + 'source_maplog' : 'log/source_map.log', + 'source_statslog' : 'log/source_stats.log', +} + +### LOAD GSLAB MAKE +f, path, desc = imp.find_module('gslab_make', [PATHS['lib']]) +gs = imp.load_module('gslab_make', f, path, desc) + +### LOAD CONFIG USER +PATHS = gs.update_paths(PATHS) +gs.update_executables(PATHS) + +############ +### MAKE ### +############ + +### START MAKE +gs.remove_dir(['input', 'external']) +gs.clear_dir(['output', 'log', 'temp']) +gs.start_makelog(PATHS) + +### GET INPUT FILES +inputs = gs.link_inputs(PATHS, ['input.txt']) +# gs.write_source_logs(PATHS, inputs + externals) +# gs.get_modified_sources(PATHS, inputs + externals) + +### RUN SCRIPTS +""" +Critical +-------- +Many of the Stata analysis scripts recode variables using +the `recode` command. Double-check all `recode` commands +to confirm recoding is correct, especially when reusing +code for a different experiment version. +""" + +gs.run_stata(PATHS, program = 'code/CommitmentResponse.do') +gs.run_stata(PATHS, program = 'code/HabitFormation.do') +gs.run_stata(PATHS, program = 'code/Heterogeneity.do') +gs.run_stata(PATHS, program = 'code/SurveyValidation.do') +gs.run_stata(PATHS, program = 'code/FDRTable.do') +gs.run_stata(PATHS, program = 'code/HeterogeneityInstrumental.do') +gs.run_stata(PATHS, program = 'code/Beliefs.do') + +gs.run_r(PATHS, program = 'code/ModelHeterogeneity.R') + +### LOG OUTPUTS +gs.log_files_in_output(PATHS) + +### CHECK FILE SIZES +# gs.check_module_size(PATHS) + +### END MAKE +gs.end_makelog(PATHS) diff --git a/17/replication_package/code/codebook.xlsx b/17/replication_package/code/codebook.xlsx new file mode 100644 index 0000000000000000000000000000000000000000..e61a2a0be579721f6bd5cdda6848e7983d7c1045 --- /dev/null +++ b/17/replication_package/code/codebook.xlsx @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7c741990bd21eb9bab76643df657fff0f7bfc2c8500f64325178d371127a16e +size 63101 diff --git a/17/replication_package/code/config.yaml b/17/replication_package/code/config.yaml new file mode 100644 index 0000000000000000000000000000000000000000..15672f160f4fcb3bc089257b0c6a4ea4fd4f3ce5 --- /dev/null +++ b/17/replication_package/code/config.yaml @@ -0,0 +1,122 @@ + +##################################################################### +# Is git LFS required to run this repository? 
+# +# This normally remain Yes, as this prevents inadvertently +# committing large data files +##################################################################### +git_lfs_required: Yes + +##################################################################### +# Other required software +##################################################################### +gslab_make_required: Yes + +software_required: + r: No + stata: No + lyx: Yes + matlab: No + latex: No + +##################################################################### +# Maximum allowed file sizes +##################################################################### +max_file_sizes: + file_MB_limit_lfs: 100 # Soft limit on file size (w/ LFS) + total_MB_limit_lfs: 500 # Soft limit on total size (w/ LFS) + file_MB_limit: 0.5 # Soft limit on file size (w/o LFS) + total_MB_limit: 100 # Soft limit on total size (w/o LFS) + +metadata: + payment: + bonus: 50 + fixed_rate: 50 + strata: i.Stratifier + +##################################################################### +# Repository metadata +##################################################################### + +# Experiment Name (could equal, for example, to 'Pilot#', 'Temptation'. If set to 'Scratch', the pipeline will process dummy data). +experiment_name: "Temptation" + +# Survey Dates (Note that the 'Phase{#}Start' surveys are just fillers for the post study phases) +surveys: + Recruitment: + Start: !!timestamp "2020-03-22 12:00:00" + End: !!timestamp "2020-04-10 10:45:00" + Baseline: + Start: !!timestamp "2020-04-12 08:00:00" + End: !!timestamp "2020-04-13 16:00:00" + Midline: + Start: !!timestamp "2020-05-03 0:00:00" + End: !!timestamp "2020-05-11 23:59:00" + Endline1: + Start: !!timestamp "2020-05-24 08:00:00" + End: !!timestamp "2020-05-31 08:00:00" + Endline2: + Start: !!timestamp "2020-06-14 08:00:00" + End: !!timestamp "2020-06-22 17:00:00" + Phase5Start: + Start: !!timestamp "2020-07-05 08:00:00" + End: !!timestamp "2020-07-05 23:59:00" + Phase6Start: + Start: !!timestamp "2020-07-26 08:00:00" + End: !!timestamp "2020-07-26 23:59:00" + Phase7Start: + Start: !!timestamp "2020-08-16 08:00:00" + End: !!timestamp "2020-08-16 23:59:00" + Phase8Start: + Start: !!timestamp "2020-09-06 08:00:00" + End: !!timestamp "2020-09-06 23:59:00" + Phase9Start: + Start: !!timestamp "2020-09-27 08:00:00" + End: !!timestamp "2020-09-27 23:59:00" + Phase10Start: + Start: !!timestamp "2020-10-18 08:00:00" + End: !!timestamp "2020-10-18 23:59:00" + Phase11Start: + Start: !!timestamp "2020-11-08 08:00:00" + End: !!timestamp "2020-11-08 23:59:00" + Enrollment: + Start: !!timestamp "2020-04-09 9:00:00" + End: !!timestamp "2020-04-11 12:00:00" + WeeklyText: + Start: !!timestamp "2020-03-25 00:00:00" + End: !!timestamp "2020-03-30 23:59:00" + PDBug: + Start: !!timestamp "2020-04-24 18:40:00" + End: !!timestamp "2020-04-28 23:59:00" + TextSurvey1: + Start: !!timestamp "2020-04-12 08:00:00" + End: !!timestamp "2020-06-17 00:00:00" + TextSurvey2: + Start: !!timestamp "2020-04-12 08:00:00" + End: !!timestamp "2020-06-17 00:00:00" + TextSurvey3: + Start: !!timestamp "2020-04-12 08:00:00" + End: !!timestamp "2020-06-17 00:00:00" + TextSurvey4: + Start: !!timestamp "2020-04-12 08:00:00" + End: !!timestamp "2020-06-17 00:00:00" + TextSurvey5: + Start: !!timestamp "2020-04-12 08:00:00" + End: !!timestamp "2020-06-17 00:00:00" + TextSurvey6: + Start: !!timestamp "2020-04-12 08:00:00" + End: !!timestamp "2020-06-17 00:00:00" + TextSurvey7: + Start: !!timestamp "2020-04-12 08:00:00" + End: !!timestamp 
"2020-06-17 00:00:00" + TextSurvey8: + Start: !!timestamp "2020-04-12 08:00:00" + End: !!timestamp "2020-06-17 00:00:00" + TextSurvey9: + Start: !!timestamp "2020-04-12 20:00:00" + End: !!timestamp "2020-06-17 00:00:00" + +# Date Range of Data Used in Study (range of data we pull PD data) +date_range: + first_pull: !!timestamp "2020-03-21 00:00:00" + last_pull: !!timestamp "2020-11-15 00:00:00" \ No newline at end of file diff --git a/17/replication_package/code/config_user.yaml b/17/replication_package/code/config_user.yaml new file mode 100644 index 0000000000000000000000000000000000000000..3454c4ca676d54f73fd2d158163efd63cea7dfcc --- /dev/null +++ b/17/replication_package/code/config_user.yaml @@ -0,0 +1,67 @@ +##################################################################### +# Make a copy of this file called config_user.yaml and place +# it at the root level of the repository +# +# This file holds local settings specific to your computing +# environment. It should not be committed to the repository. +##################################################################### + +##################################################################### +# External dependencies +# +# This section defines resources used by the code that are external +# to the repository. Code should never reference any files external +# to the repository except via these paths. +# +# Each external resource is defined by a key with a value equal +# to the local path to the resource. These +# keys should be short descriptive names that will then be used +# to refer to these resources in code. E.g., "raw_data", +# "my_other_repo", etc. Defaults can optionally be placed in +# brackets after the colon +# +# Replace the paths below with correct local paths on your machine +# +##################################################################### +external: + dropbox: /project #Point to PhoneAddiction Dropbox Root + +##################################################################### +# Local settings +# +# This section defines parameters specific to each user's local +# environment. +# +# Examples include names of executables, usernames, etc. These +# variables should NOT be used to store passwords. +# +# Each parameter is defined by a key with default value. These +# keys should be short descriptive names that will then be used +# to refer to the parameters in code. +# +##################################################################### +local: + + # Executable names + executables: + + python: python + r: Rscript + stata: stata-mp + matlab: matlab + lyx: lyx + latex: latex + + # Data Run + #if true, data/run will start by reading in the latest raw master data file, instead of processing raw phone dashboard data + skip_building: True + + # if true, will process new data in parallel. only relevant if skip_building == False + parallel: False + cores: 4 + + #if true, will use all data in DataTest and ConfidentialTest + test: False + +#if true, stdout will write to data/log/mb_log.log instead of to terminal + log: False diff --git a/17/replication_package/code/data/README.md b/17/replication_package/code/data/README.md new file mode 100644 index 0000000000000000000000000000000000000000..954016a220fb7070f94549730b231da088161b50 --- /dev/null +++ b/17/replication_package/code/data/README.md @@ -0,0 +1,96 @@ +# data +This module contains all code that preps for analysis and produces survey management deliverables (e.g. contact lists). 
The dataset needed to run this module rely on confidential data, and were thus omitted from this replication archive. + +We detail how, in the presence of the raw confidential data, this module construct the main datasets. + + #### 1. Pipeline overview + We run the whole data pipeline by calling data/make.py, which will call run 3 main sub modules below. Note: + many classes and functions required in this pipeline are located in lib/data_helpers. + + 1. source/build_master + + a. Purpose: a builder object ( in builder.py) will pull all the raw data, detect gaming, clean each individual data files, merge them + on the user level and the user_day_app level. + + b. Input: + i. Raw Survey + ii. Phone Dashboard + + c. Output: + i. master_raw_user.pickle, a raw master file on the user level, that will contain data from surveys and PD data for each phase + ii. master_user_day_app.pickle, a clean master file on the user day app level that will contain use, limit, and snooze activity + + 2. source/clean_master + + a. Purpose: a cleaner object (in cleaner.py) will clean the raw_master_user.pickle, and assign treatments, calculate earnings, + and create outcome variables + + b. Input: raw_master_user.pickle + + c. Output: clean_master_user.pickle + + 3. source/exporters + + a. Purpose: creates contact lists, tango cards, phone dashboard treatment configs, and analysis files ready for stata + + b. Input: master_clean_user.pickle and master_user_day_app.pickle + + c. Output: + i. Contact Lists, Tango Cards, Phone Dashboard, Treatment Configs, and other data with identifiable info will output into /Dropbox/PhoneAddiction/Confidential + ii. pre_analysis_user.csv and pre_analysis_user_app_day.csv in /Dropbox/PhoneAddiction/Data/{experiment_name}/Intermediate + + ## 2. Configurations: + - root/config_user.yaml: configurations that alter how the pipeline is run. Read through those in the yaml file, but to highlight: + 1. skip_building: if true, data/run will start by reading in the latest raw master data file, instead of processing raw phone dashboard data. You should not attempt run the raw PD data unless you're on Sherlock or some HPC + 2. test: if set true, this runs nearly the full pipeline, but with dummy data. Data is saved in DataTest and ConfidentialTest. This is helpful when testing something in the build class + + - root/config.yaml: sets significant experiment dates (survey dates, and date range of PD data pull) + + - root/lib/experiment_specs: contains detailed specs for the data pipeline. Check out the README in that folder for specifics. + + ## 3. Raw Phone Dashboard Data Exports + - All PD Data arrives in the PhonedashboardPort dropbox folder. All these files are processed by functions + in data/source/build_master/builder.py and helper functions in lib/data_helpers + + ## Snooze Events + 1. PD receives usage ping from ForegroundApplication generator. + + 2. If app usage is within X minutes of budget being exhausted: + + 2.a: PD does not block app, but launches warning activity with the package and metadata. + + 2.b: Warning activity throws up warning dialog (event = app-block-warning). + + 2.c: User closes / cancels dialog (event = closed-warning). + + 3. If app usage is past budget AND has been snoozed, and snooze delay has not elapsed: + + 3.a: PD blocks app (returns system to the home screen (event = blocked_app). + + 3.b: PD shows dialog letting user know that delay hasn’t elapsed (event = app-blocked-no-snooze). + + 3.c: User closes/cancels dialog (event = app-blocked-no-snooze-closed). 
+ + 3. If app usage is past budget AND app has not been snoozed: + + 3.a: PD blocks app (returns system to the home screen (event = blocked_app). + + 3.b: PD shows dialog letting user budget is exhausted (event = app-blocked-no-snooze). + + 3.c: If snooze is NOT enabled for user: + + 3.c.1: PD shows dialog that user cannot use app until tomorrow (event = app-blocked-no-snooze). + + 3.c.2: User closes / cancels dialog (event = app-blocked-no-snooze-closed). + + 3.d: If snooze IS enabled for user: + + 3.d.1: PD shows dialog letting user know budget is up (event = app-blocked-can-snooze). + + 3.d.2: If user closes / cancels dialog (event = skipped-snooze). + + 3.d.3: If user decides to snooze, PD shows dialog asking about snooze amount (no event generated). + + 3.d.3.a: User closes / cancels snooze dialog, without setting limit (event = cancelled-snooze). + + 3.d.3.b: User sets snooze amount (event = snoozed-app-limit). diff --git a/17/replication_package/code/data/__init__.py b/17/replication_package/code/data/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/17/replication_package/code/data/external.txt b/17/replication_package/code/data/external.txt new file mode 100644 index 0000000000000000000000000000000000000000..1bbd96c0390f016fe6e0c8401fd9fdb5088e3e6e --- /dev/null +++ b/17/replication_package/code/data/external.txt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e8534419da84c6a8c9f902d366dd6c964c2ec49c804730cc4ad333a7d7f05a39 +size 1135 diff --git a/17/replication_package/code/data/input.txt b/17/replication_package/code/data/input.txt new file mode 100644 index 0000000000000000000000000000000000000000..8f87c90fab724e28dcd9fabc9e15c1654c6a475b --- /dev/null +++ b/17/replication_package/code/data/input.txt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:47d958523f79a58631b695d9e13762414d22ce0055ae9ac8b9f6ad63c17026c1 +size 676 diff --git a/17/replication_package/code/data/make.py b/17/replication_package/code/data/make.py new file mode 100644 index 0000000000000000000000000000000000000000..186c93454bde14ed7ec683583705eb2d63175729 --- /dev/null +++ b/17/replication_package/code/data/make.py @@ -0,0 +1,68 @@ +################### +### ENVIRONMENT ### +################### +import git +import imp +import os +import yaml + +### SET DEFAULT PATHS +ROOT = git.Repo('.', search_parent_directories = True).working_tree_dir + +PATHS = { + 'root' : ROOT, + 'lib' : os.path.join(ROOT, 'lib'), + 'config' : os.path.join(ROOT, 'config.yaml'), + 'config_user' : os.path.join(ROOT, 'config_user.yaml'), + 'input_dir' : 'input', + 'external_dir' : 'external', + 'output_dir' : 'output', + 'output_local_dir' : 'output_local', + 'makelog' : 'log/make.log', + 'output_statslog' : 'log/output_stats.log', + 'source_maplog' : 'log/source_map.log', + 'source_statslog' : 'log/source_stats.log' +} + +### ADD EXPERIMENT NAME TO PATH +with open(PATHS['config'], 'r') as stream: + config = yaml.safe_load(stream) + +PATHS["experiment_name"] = config['experiment_name'] + +### LOAD GSLAB MAKE +f, path, desc = imp.find_module('gslab_make', [PATHS['lib']]) +gs = imp.load_module('gslab_make', f, path, desc) + +### LOAD CONFIG USER +PATHS = gs.update_paths(PATHS) +gs.update_executables(PATHS) + +############ +### MAKE ### +############ + +### START MAKE +gs.remove_dir(['input', 'external']) +gs.clear_dir(['output', 'log']) +gs.start_makelog(PATHS) + +### GET INPUT FILES +inputs = 
gs.link_inputs(PATHS, ['input.txt']) +externals = gs.link_externals(PATHS, ['external.txt']) + +gs.write_source_logs(PATHS, inputs + externals) +gs.get_modified_sources(PATHS, inputs + externals) + +### RUN SCRIPTS +gs.run_python(PATHS, program = 'source/run.py') +gs.run_stata(PATHS, program = 'source/prep_stata.do') + +### LOG OUTPUTS +gs.log_files_in_output(PATHS) + +### CHECK FILE SIZES +gs.check_module_size(PATHS) + +### END MAKE +gs.end_makelog(PATHS) \ No newline at end of file diff --git a/17/replication_package/code/data/source/__init__.py b/17/replication_package/code/data/source/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/17/replication_package/code/data/source/build_master/__init__.py b/17/replication_package/code/data/source/build_master/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/17/replication_package/code/data/source/build_master/builder.py b/17/replication_package/code/data/source/build_master/builder.py new file mode 100644 index 0000000000000000000000000000000000000000..9a3ae595ee631d8395af1ced50f40706aa1fa118 --- /dev/null +++ b/17/replication_package/code/data/source/build_master/builder.py @@ -0,0 +1,328 @@ +from datetime import datetime,timedelta +import pandas as pd +import os +import sys +import git +from pympler.tracker import SummaryTracker + +#importing modules from root of data +root = git.Repo('.', search_parent_directories = True).working_tree_dir +sys.path.append(root) +os.chdir(os.path.join(root)) + +from lib.data_helpers.pull_events import PullEvents +from lib.utilities import serialize +from data.source.build_master.pullers.pull_events_use import PullEventsUse +from data.source.build_master.pullers.pull_events_alt import PullEventsAlt + + +from lib.data_helpers.clean_events import CleanEvents +from data.source.build_master.cleaners.clean_surveys import CleanSurveys +from data.source.build_master.cleaners.clean_events_use import CleanEventsUse +from data.source.build_master.cleaners.clean_events_status import CleanEventsStatus +from data.source.build_master.cleaners.clean_events_budget import CleanEventsBudget +from data.source.build_master.cleaners.clean_events_snooze_delays import CleanEventsSnoozeDelays +from data.source.build_master.cleaners.clean_events_snooze import CleanEventsSnooze +from data.source.build_master.cleaners.clean_events_alt import CleanEventsAlt + + + + +from lib.data_helpers.gaming import Gaming +from data.source.build_master.master_raw_user import MasterRawUser +from data.source.build_master.master_raw_user_day_app import MasterRawUserDayApp + +from lib.experiment_specs import study_config + +""" + +""" +class Builder(): + + @staticmethod + def build_master(): + tracker = SummaryTracker() + + # print(f"\n Clean Survey Data {datetime.now()}") + # clean_surveys = CleanSurveys.clean_all_surveys() + + # print(f"\nInitializing Master DF and add survey data {datetime.now()}") + # raw_user = MasterRawUser(initial_survey_df= clean_surveys[study_config.initial_master_survey]) + # raw_user.add(clean_surveys) + # del clean_surveys + + # print(f"\nCleaning Traditional Use and DetectGaming {datetime.now()}") + # trad_use_phase, trad_use_hour = Builder._build_pd_use() + + # game_df = Gaming.process_gaming(error_margin=1, + # hour_use=trad_use_hour, + # raw_user_df=raw_user.raw_master_df) + # raw_user.add({"Game": game_df}) + + # tracker.print_diff() + # del 
[trad_use_phase, game_df] + # tracker.print_diff() + + # if datetime.now() > study_config.surveys["Midline"]["Start"]: + # print(f"\nCleaning Limit Data {datetime.now()}") + # pd_snooze = Builder._build_pd_snooze() + # budget_phase, pd_budget = Builder._build_pd_budget() + # try: + # Builder._build_pd_snooze_delay() + # except: + # print("couldn't process snooze delay data") + + # raw_user.add({"PDBudget": budget_phase}) + # else: + # pd_budget = pd.DataFrame() + # pd_snooze = pd.DataFrame() + + print(f"\nCleaning Traditional Use Individual {datetime.now()}") + Builder._build_pd_use_indiv() + + # print(f"\n Alternative and Status Data {datetime.now()}") + # alt_use_hour, alt_use_phase = Builder._build_pd_alt(trad_use_hour) + # raw_user.add({"AltPDUse": alt_use_phase}) + + # clean_status, pd_latest = Builder._build_pd_status(raw_user.raw_master_df,alt_use_hour) + # raw_user.add({"LatestPD": pd_latest}) + # del [alt_use_phase, pd_latest] + + # print(f"\n Serialize user level data before building user-app-day data") + # config_user_dict = serialize.open_yaml("config_user.yaml") + # if config_user_dict['local']['test'] == False: + # serialize.save_pickle(raw_user.raw_master_df, + # os.path.join("data", "external", "intermediate", "MasterIntermediateUser")) + + # print(f"\n Create UserXAppXDate Level data {datetime.now()}") + # MasterRawUserDayApp.build(alt_use_hour,pd_budget,pd_snooze,clean_status) + + # tracker.print_diff() + # del [pd_budget,pd_snooze,alt_use_hour] + + # print(f"\n Recover Old Install Data") + # PullEventsAlt.recover_install_data() + # return raw_user.raw_master_df + + + @staticmethod + def _build_pd_use(): + pd_use_puller = PullEvents(source="PhoneDashboard", + keyword="Use", + scratch=False, + test=False, + time_cols=["Created", "Recorded"], + raw_timezone="Local", + appcode_col='Source', + identifying_cols=["AppCode", "ForegroundApp", "ScreenActive", + "CreatedDatetimeHour"], + sort_cols= ["CreatedDatetimeHour","RecordedDatetimeHour"], + drop_cols= ["PlayStoreCategory","UploadLag"], + cat_cols = ["ForegroundApp"], + compress_type="txt", + processing_func=PullEventsUse.process_raw_use) + + raw_hour_use = pd_use_puller.update_data() + + use_cleaner = CleanEvents(source="PhoneDashboard", keyword="Use") + use_phase, use_hour = use_cleaner.clean_events(raw_event_df=raw_hour_use, + date_col="CreatedDate", + cleaner=CleanEventsUse(use_type="Traditional")) + + CleanEventsUse.get_timezones(use_hour, "CreatedDatetimeHour", "CreatedEasternDatetimeHour") + + + return use_phase, use_hour + + @staticmethod + def _build_pd_use_indiv(): + pd_use_puller = PullEvents(source="PhoneDashboard", + keyword="UseIndiv", + scratch=True, + test=False, + time_cols=["Created", "Recorded"], + raw_timezone="Local", + appcode_col='Source', + identifying_cols=["AppCode", "ForegroundApp", "StartTime", "UseMinutes"], + sort_cols= ["StartTime"], + drop_cols= ["PlayStoreCategory","UploadLag"], + cat_cols = ["ForegroundApp"], + compress_type="txt", + processing_func=PullEventsUse.process_raw_use_indiv) + + raw_hour_use = pd_use_puller.update_data() + + # use_cleaner = CleanEvents(source="PhoneDashboard", keyword="Use") + # use_phase, use_hour = use_cleaner.clean_events(raw_event_df=raw_hour_use, + # date_col="CreatedDate", + # cleaner=CleanEventsUse(use_type="Traditional")) + + # CleanEventsUse.get_timezones(use_hour, "CreatedDatetimeHour", "CreatedEasternDatetimeHour") + + + @staticmethod + def _build_pd_status(raw_master: pd.DataFrame, alt_use_hour: pd.DataFrame): + pd_use_puller = 
PullEvents(source="PhoneDashboard", + keyword="Status", + scratch=False, + test=False, + time_cols=["LastUpload"], + raw_timezone="Local", + appcode_col='Participant', + identifying_cols=["AppCode", "Group", "Blocker", + "LastUpload", "AppVersion","PlatformVersion","PhoneModel","OptedOut"], + sort_cols = ["LastUpload"], + drop_cols = ['PhaseUseBrowser(ms)', + 'PhaseUseFB(ms)', + 'PhaseUseIG(ms)', + 'PhaseUseOverall(ms)', + 'PhaseUseSnap(ms)', + 'PhaseUseYoutube(ms)',"AsOf"], + cat_cols = [], + compress_type="txt",) + + raw_status = pd_use_puller.update_data() + raw_status["LastUploadDate"] = raw_status["LastUpload"].apply(lambda x: x.date()) + use_cleaner = CleanEvents(source="PhoneDashboard", keyword="Status") + clean_status = use_cleaner.clean_events(raw_event_df=raw_status, + date_col="LastUploadDate", + cleaner=CleanEventsStatus(), + phase_data=False) + + pd_latest = CleanEventsStatus.get_latest_pd_health(clean_status, raw_master, alt_use_hour) + return clean_status, pd_latest + + @staticmethod + def _build_pd_alt(clean_trad_use_hour): + alt_json_reader = PullEventsAlt() + pd_alt_puller = PullEvents(source="PhoneDashboard", + keyword="Alternative", + scratch=False, + test=False, + time_cols=["Created"], + raw_timezone="Local", + appcode_col='AppCode', + identifying_cols=["AppCode", "ForegroundApp", "CreatedDatetimeHour"], + sort_cols = ["Observed","CreatedDatetimeHour"], + drop_cols = ["Com.AudaciousSoftware.PhoneDashboard.AppTimeBudget", "Timezone", + "CreatedDatetime","CreatedEasternDatetime","Label", "CreatedDate", + "PlayStoreCategory","DaysObserved","Index","ZipFolder","CreatedEasternMinusLocalHours"], + cat_cols = ["ForegroundApp"], + compress_type="folder", + processing_func=alt_json_reader.process_raw_use, + file_reader=alt_json_reader.read_alt) + + # This function will read in and update all types of alternative data, will only return the use data + # and will serialize all other data + raw_alt_use_hour = pd_alt_puller.update_data() + try: + combined_raw_alt_use_hour = PullEventsAlt.combine_trad_alt(raw_alt_use_hour,clean_trad_use_hour) + except: + print("could not combine trad and alt") + combined_raw_alt_use_hour = raw_alt_use_hour.copy() + + use_cleaner = CleanEvents(source="PhoneDashboard", keyword="Alternative") + use_phase, use_hour = use_cleaner.clean_events(raw_event_df=combined_raw_alt_use_hour, + date_col="CreatedDate", + cleaner=CleanEventsUse(use_type="Alternative")) + + config_user_dict = serialize.open_yaml("config_user.yaml") + if config_user_dict['local']['test']== False: + try: + print(f"\n Clean Alt Install data events {datetime.now()}") + CleanEventsAlt.process_appcode_files( + input_folder = os.path.join("data", "external", "input", "PhoneDashboard", "RawAltInstall"), + output_file = os.path.join("data", "external", "intermediate", "PhoneDashboard", "AltInstall"), + cleaning_function= CleanEventsAlt.clean_install + ) + except: + print("could not aggregate install data") + return use_hour, use_phase + + @staticmethod + def _build_pd_budget(): + """processes the limit setting data""" + pd_budget_puller = PullEvents(source="PhoneDashboard", + keyword="Budget", + scratch=False, + test=False, + time_cols=["Updated","EffectiveDate"], + raw_timezone="Local", + appcode_col="Source", + identifying_cols=["AppCode", "App", "Updated", "EffectiveDate"], + sort_cols=["Updated"], + drop_cols = [], + cat_cols = [], + compress_type="txt") + + pd_budget = pd_budget_puller.update_data() + + budget_cleaner = CleanEvents(source="PhoneDashboard", keyword="Budget") + clean_budget 
= budget_cleaner.clean_events(raw_event_df=pd_budget, + date_col="EffectiveDate", + cleaner=CleanEventsBudget(), + phase_data = False) + + budget_sum = CleanEventsBudget.get_latest_budget_data(clean_budget) + + return budget_sum, clean_budget + + @staticmethod + def _build_pd_snooze_delay(): + """process the custom snooze data (post study functionality)""" + pd_snooze_delay_puller = PullEvents(source="PhoneDashboard", + keyword="Delays", + scratch = False, + test = False, + time_cols=["UpdatedDatetime", "EffectiveDatetime"], + raw_timezone = "Local", + appcode_col="App Code", + identifying_cols=["AppCode", "SnoozeDelay", "UpdatedDatetime"], + sort_cols = ["UpdatedDatetime"], + drop_cols= [], + cat_cols = [], + compress_type="txt") + + raw_delayed_snooze = pd_snooze_delay_puller.update_data() + snooze_delay_cleaner = CleanEvents(source="PhoneDashboard", keyword="Delays") + + clean_delays = snooze_delay_cleaner.clean_events(raw_event_df=raw_delayed_snooze, + date_col= "EffectiveDate", + cleaner= CleanEventsSnoozeDelays(), + phase_data=False) + + clean_delays.to_csv(os.path.join("data","external", "intermediate", "PhoneDashboard", "Delays.csv")) + + + + @staticmethod + def _build_pd_snooze(): + """processes the snooze event data""" + pd_snooze_puller = PullEvents(source="PhoneDashboard", + keyword="Snooze", + scratch = False, + test = False, + time_cols=["Recorded", "Created"], + raw_timezone = "Local", + appcode_col="Source", + identifying_cols=["AppCode", "App", "Event", "Created"], + sort_cols = ["Created"], + drop_cols= [], + cat_cols = [], + compress_type="txt") + + raw_snooze = pd_snooze_puller.update_data() + + snooze_cleaner = CleanEvents(source="PhoneDashboard", keyword="Snooze") + + pd_snooze = snooze_cleaner.clean_events(raw_event_df=raw_snooze, + date_col= "Date", + cleaner= CleanEventsSnooze(), + phase_data=False) + + CleanEventsSnooze.get_premature_blocks(pd_snooze) + + return pd_snooze + +if __name__ == "__main__": + pd_snooze = Builder._build_pd_snooze_delay() \ No newline at end of file diff --git a/17/replication_package/code/data/source/build_master/cleaners/clean_events_alt.py b/17/replication_package/code/data/source/build_master/cleaners/clean_events_alt.py new file mode 100644 index 0000000000000000000000000000000000000000..c54bc9b3d955e94639629412ea3ac30ba047f1b8 --- /dev/null +++ b/17/replication_package/code/data/source/build_master/cleaners/clean_events_alt.py @@ -0,0 +1,150 @@ +import os +import json +import git +import sys +import pandas as pd + +#importing modules from root of data +root = git.Repo('.', search_parent_directories = True).working_tree_dir +sys.path.append(root) +os.chdir(os.path.join(root)) + +from lib.experiment_specs import study_config +from lib.data_helpers import data_utils + +from lib.data_helpers.builder_utils import BuilderUtils +from lib.utilities import serialize + +class CleanEventsAlt(): + + @staticmethod + def process_appcode_files(input_folder,output_file,cleaning_function): + """ + inputs: + - input_folder: directory where all pickle files will be read and appeneded + - outpul_file: the directory where the output file will be saves + - cleaning_function: the function used to clean the aggregated data + """ + appcodes = [x for x in os.listdir(input_folder) if ".pickle" in x] + df_list = [] + print(appcodes[:5]) + for appcode in appcodes: + path = os.path.join(input_folder, appcode) + try: + a_df = serialize.open_pickle(path) + if len(a_df) == 0: + continue + df_list.append(a_df) + + #try: + # d = 
serialize.open_hdf(path.replace(".pickle",".h5")) + #except: + # print(f"could not open {appcode} h5 file!!!!, but pickle opened without problems") + + except: + print(f"could not read {appcode} raw install pickle data") + + if len(df_list) > 0: + df = pd.concat(df_list).reset_index(drop=True) + df = cleaning_function(df) + + try: + serialize.save_hdf(df, output_file) + except: + print("Couldn't save hdf") + + try: + df.to_csv(output_file + ".csv", index=False) + except: + print("couldn't save csv file") + + + @staticmethod + def clean_install(a_df): + # add column to indiciate if app is FITSBY + a_df = data_utils.add_A_to_appcode(a_df, "AppCode") + duplicate_fitsby_apps = pd.read_excel(os.path.join("lib", "experiment_specs", "FITSBY_apps.xlsx")) + a_df = a_df.merge(duplicate_fitsby_apps, on='App', how='left') + a_df = a_df.drop_duplicates(subset = ["AppCode","App","Date"]) + return a_df + + ################ + #####OLD######### + ############### + @staticmethod + def process_appcode_files_OLD(input_folder,output_file,cleaning_function): + appcodes = [x for x in os.listdir(input_folder) if ".pickle" in x] + df_list = [] + for appcode in appcodes: + path = os.path.join(input_folder, appcode) + a_dict = serialize.open_pickle(path, df_bool=False) + if len(a_dict) == 0: + continue + + a_df = cleaning_function(a_dict) + a_df["AppCode"] = "A" + appcode.replace(".pickle", "") + df_list.append(a_df) + + if len(df_list)>0: + df = pd.concat(df_list).reset_index(drop = True) + try: + serialize.save_pickle(df, output_file) + except: + print("Couldn't save Pickle") + + try: + # DONT PUT IN TRY BECAUSE IF BACKUP FAILS, WE WANT TO RE PROCESS THE NEW FILES + df.to_csv(output_file+".csv", index=False, compression='gzip') + except: + print("couldn't save zip file") + + return df + + else: + print("no alt data yet!") + return pd.DataFrame() + + @staticmethod + def clean_alt_block_data(a_dict): + a_df = pd.DataFrame.from_dict(a_dict, orient='index').reset_index().rename(columns={"index": "Created", "app": "App"}) + a_df = data_utils.clean_iso_dates(a_df, 'Created') + a_df["AltLimitMinutes"] = a_df["time_budget"] / (1000 * 60) + a_df["AltUseMinutesAtEvent"] = a_df["time_usage"] / (1000 * 60) + return a_df + + @staticmethod + def clean_warnings(a_dict): + df = pd.DataFrame.from_dict(a_dict, orient='index').reset_index().rename(columns={"date": "Created"}).drop( + columns='index') + df = data_utils.clean_iso_dates(df, 'Created') + + df['details_dict'] = df['details'].apply(lambda x: json.loads(x)) + chars_df = df["details_dict"].apply(pd.Series) + assert len(chars_df) == len(df) + df = pd.concat([df, chars_df], axis=1) + df = df.drop(columns=["details_dict", "details"]) + + df = df.rename(columns = {"event": "Event", + "minutes-remaining": "MinutesRemaining", + "package": "App", + "snooze-delay": "SnoozeDelay", + "snooze-minutes": "SnoozeMinutes"}) + + event_rename = {"app-block-warning": "App Warning Displayed", + "app-blocked-can-snooze": "App Blocked - Snooze Offered", + "app-blocked-delayed": "App Blocked Until Delay Elapsed", + "app-blocked-no-snooze": "App Blocked - Snooze Unavailable", + "app-blocked-no-snooze-closed": "User Closed App Blocked (No Snooze) Warning", + "cancelled-snooze": "User Cancelled Snooze", + "closed-delay-warning": "User Closed Delay Warning", + "closed-warning": "User Closed Warning", + "skipped-snooze": "User Declined Snooze", + "snoozed-app-limit": "Snooze Enabled"} + df["Event"] = df["Event"].apply(lambda x: event_rename[x]) + return df + +if __name__ == "__main__": + input_folder 
= os.path.join("data","external","input","PhoneDashboard","RawAltInstall") + output_file = os.path.join("data", "external", "intermediate", "PhoneDashboard", "AltInstall") + CleanEventsAlt.process_appcode_files(input_folder,output_file,CleanEventsAlt.clean_install) + print('donzo') \ No newline at end of file diff --git a/17/replication_package/code/data/source/build_master/cleaners/clean_events_budget.py b/17/replication_package/code/data/source/build_master/cleaners/clean_events_budget.py new file mode 100644 index 0000000000000000000000000000000000000000..0f992ab34e4776b5f38a52330f563c8b0532c40a --- /dev/null +++ b/17/replication_package/code/data/source/build_master/cleaners/clean_events_budget.py @@ -0,0 +1,58 @@ +import sys +import os +from datetime import datetime, timedelta +import numpy as np +import pandas as pd + +from lib.experiment_specs import study_config +from lib.data_helpers import data_utils +from lib.utilities import codebook +from lib.utilities import serialize + +from lib.data_helpers.builder_utils import BuilderUtils + +"""" +The new use cleaner, which will deprecate phone_use_cleaner, and phase_use +""" +class CleanEventsBudget(): + + clean_file = os.path.join("data","external","intermediate","PhoneDashboard","CleanBudget") + + def prep_clean(self, df): + # Update Use Data + df["SawLimitSettingPage"] = True + + df.loc[df["App"] != "placeholder.app.does.not.exist","HasSetLimit"] = True + #df = df.loc[df["App"] != "placeholder.app.does.not.exist"] + df["EffectiveDate"] = df["EffectiveDate"].dt.date + df = df.loc[(df["EffectiveDate"] >= study_config.surveys["Midline"]["Start"].date())] + + return df + + """Called in the Event Cleaner, after the data has been subsetted to a given phase""" + def phase_clean(self, df, phase): + # prep + #summarize + ## print("hello") + return df + + @staticmethod + def get_latest_budget_data(clean_budget_df): + df = clean_budget_df[["AppCode", "HasSetLimit", "SawLimitSettingPage"]].groupby( + ["AppCode"]).first().reset_index() + + apps = clean_budget_df.groupby(["AppCode", "App"])["NewLimit"].last().reset_index() + apps = apps.loc[apps["App"].isin(study_config.fitsby)] + apps["LimitMinutes"] = apps["NewLimit"] / (60 * 1000) + apps["App"] = apps["App"].apply(lambda x: x.capitalize()) + + apps_p = apps.pivot_table(index=["AppCode"], + values=["LimitMinutes"], + columns=["App"], + aggfunc='first') + apps_p.columns = [''.join(col[::1]).strip() for col in apps_p.columns.values] + apps_p = apps_p.reset_index() + + df = df.merge(apps_p, on = "AppCode", how = "left") + return df + diff --git a/17/replication_package/code/data/source/build_master/cleaners/clean_events_pc.py b/17/replication_package/code/data/source/build_master/cleaners/clean_events_pc.py new file mode 100644 index 0000000000000000000000000000000000000000..bd3c174a396b166939265b6c41985a501383838c --- /dev/null +++ b/17/replication_package/code/data/source/build_master/cleaners/clean_events_pc.py @@ -0,0 +1,60 @@ +import sys +import os +from datetime import datetime, timedelta + +from lib.data_helpers.builder_utils import BuilderUtils +from lib.utilities import serialize + +class CleanEventsPC(): + + def __init__(self): + self.social_hosts = [] + self.use_subsets = { + "WebDesktop": { + "Filters": {"WebBool":[True]}, + "DenomCol": "DaysWithWeb", + "NumCols": ["UseMinutes"]}, + + "FBDesktop" :{ + "Filters": {"FBBool": [True]}, + "DenomCol": "DaysWithWeb", + "NumCols": ["UseMinutes"]}, + + "IGDesktop":{ + "Filters": {"IGBool": [True]}, + "DenomCol": "DaysWithWeb", + "NumCols": 
["UseMinutes"]} + } + + + def prep_clean(self,df): + # Rename some things + df["UseMinutes"] = df["Duration"]/60 + df = df.drop(columns = ["Duration"]) + + # create date variable + df["StartedOnIsoDate"] = df["StartedOnIso"].apply(lambda x: x.date()) + df["EndedOnIsoDatetime"] = df["StartedOnIso"]+df["UseMinutes"].apply(lambda x: timedelta(seconds = x*60)) + + # label treatment webistes + df.loc[df["Website"].notnull(), "WebBool"] = True + df.loc[df["Website"].fillna("nan").str.contains("facebook"), "FBBool"] = True + df.loc[df["Website"].fillna("nan").str.contains("instagram"), "IGBool"] = True + + # Create List of hosts, ordered by popularity + top_hosts = df.groupby(['Website'])['AsOf'].agg(['count']) + top_hosts = top_hosts.rename(columns={'count': "WebsiteVisitCount"}).reset_index().sort_values( + by="WebsiteVisitCount", ascending=False) + top_hosts.to_csv(os.path.join("data","external","intermediate","PCDashboard", "TopSites.csv"), + index=False) + + # get social hosts + self.social_hosts = [y for y in list(top_hosts["Website"]) if any(x in y for x in study_config.social_websites)] + return df + + """Called in the Event Cleaner, after the data has been subsetted to a given phase""" + def phase_clean(self, df, phase): + df["WebDay"] = df["StartedOnIso"].apply(lambda x: x.date()) + df.loc[:, "DaysWithWeb"] = df.groupby(by=['AppCode'])['WebDay'].transform(lambda x: x.nunique()) + df = BuilderUtils.get_subsets_avg_use(df, self.use_subsets) + return df diff --git a/17/replication_package/code/data/source/build_master/cleaners/clean_events_snooze.py b/17/replication_package/code/data/source/build_master/cleaners/clean_events_snooze.py new file mode 100644 index 0000000000000000000000000000000000000000..a8cffa4ca46e6bb4902480f1fd5aa620b958118a --- /dev/null +++ b/17/replication_package/code/data/source/build_master/cleaners/clean_events_snooze.py @@ -0,0 +1,43 @@ +from lib.data_helpers.builder_utils import BuilderUtils +import os +from datetime import datetime,timedelta +from lib.experiment_specs import study_config + +"""" +The new use cleaner, which will deprecate phone_use_cleaner, and phase_use +""" +class CleanEventsSnooze(): + + clean_file = os.path.join("data","external","intermediate","PhoneDashboard","CleanSnooze") + + def prep_clean(self, df): + df["Date"] = df["Created"].dt.date + df = df.loc[(df["Date"] >= study_config.surveys["Midline"]["Start"].date())] + return df + + @staticmethod + def get_premature_blocks(sn): + ud_s = sn.groupby(["AppCode", "Date", "App"]).first().reset_index() + + bug = ud_s.loc[~ud_s["Event"].isin(["App Warning Displayed"])] + bug = bug.loc[bug["Created"] > datetime(2020, 5, 2, 0, 0), ["AppCode", "Created", "Date", "App", "Event", + "SnoozeExtension"]] + print(bug["Event"].value_counts()) + + # look for user-days for which last event was a display that the user didn't close + ud_l = sn.groupby(["AppCode", "Date", "App"]).last().reset_index() + b_p = ud_l.loc[ud_l["Event"].isin(["App Blocked - Snooze Offered", + "App Blocked - Snooze Unavailable", + "App Warning Displayed"])] + b_p["NextDate"] = b_p["Date"].apply(lambda x: x + timedelta(1)) + b_p = b_p.rename(columns={"Event": "YesterdayEvent", "App": "YesterdayApp", + "Created": "YesterdayCreated", + "SnoozeExtension": "YesterdaySnoozeExtension"}) + + bug = bug.merge(b_p[["AppCode", "YesterdayApp", "NextDate", "YesterdayEvent", "YesterdayCreated", + "YesterdaySnoozeExtension"]], + right_on=["AppCode", "NextDate", "YesterdayApp"], + left_on=["AppCode", "Date", "App"], + how='left') + + 
bug.to_csv(os.path.join("data","external","intermediate","Scratch","3b_SnoozeEvent.csv")) \ No newline at end of file diff --git a/17/replication_package/code/data/source/build_master/cleaners/clean_events_snooze_delays.py b/17/replication_package/code/data/source/build_master/cleaners/clean_events_snooze_delays.py new file mode 100644 index 0000000000000000000000000000000000000000..dd41fa1c0bf83b11277c263dd267ee4b77a244e5 --- /dev/null +++ b/17/replication_package/code/data/source/build_master/cleaners/clean_events_snooze_delays.py @@ -0,0 +1,16 @@ + +class CleanEventsSnoozeDelays(): + + def __init__(self): + empty = [] + + def prep_clean(self,df): + # Rename some things + df['EffectiveDate'] = df['EffectiveDatetime'].apply(lambda x: x.date()) + return df + + """Called in the Event Cleaner, after the data has been subsetted to a given phase""" + def phase_clean(self, df, phase): + return df + + diff --git a/17/replication_package/code/data/source/build_master/cleaners/clean_events_status.py b/17/replication_package/code/data/source/build_master/cleaners/clean_events_status.py new file mode 100644 index 0000000000000000000000000000000000000000..f06da1e8928570714eb6117dfcb6e545d93027f8 --- /dev/null +++ b/17/replication_package/code/data/source/build_master/cleaners/clean_events_status.py @@ -0,0 +1,59 @@ +import os +import pandas as pd +from lib.data_helpers.builder_utils import BuilderUtils +from lib.utilities import serialize +from lib.experiment_specs import study_config +from lib.data_helpers import data_utils +from datetime import datetime, timedelta + +class CleanEventsStatus(): + + def __init__(self): + self.social_hosts = [] + + def prep_clean(self,df): + # Rename some things + return df + + """Called in the Event Cleaner, after the data has been subsetted to a given phase""" + def phase_clean(self, df, phase): + #df = df.sort_values(by = ["LastUpload"]) + #df_l = df.groupby(["AppCode"]).last().reset_index() + return df + + @staticmethod + def get_latest_pd_health(st, mr, uah): + #OptedOut represents if the opted out of limits, and EmailEnabled suggests if Chris sends user automatic emails + # about missing data + + keep_cols = ["AppCode","PhoneModel","AppVersion","PlatformVersion","LastUpload","Server", + "OptedOut","E-MailEnabled"] + st_l = st.sort_values(["LastUpload"]).groupby(["AppCode"]).last().reset_index() + st_l["Server"] = st_l["Zipfile"].apply(lambda x: x.split("_")[1] if "_" in x else "nan") + st_l = st_l[keep_cols] + + lt = uah.loc[uah["CreatedDate"] >= study_config.active_threshold].groupby(["AppCode"])["UseMinutes"].sum() + l = st_l.merge(lt, on = "AppCode", how = 'outer') + + + ############ + #status indicates how pd data looks since study_config.active_threshold + ############ + last_survey_complete = data_utils.get_last_survey() + code = study_config.surveys[last_survey_complete]["Code"] + + #only code latest status for people that completed the last survey that ended + l = l.merge(mr.loc[mr[f"{code}_Complete"]=="Complete",["AppCode",f"{code}_Complete"]], on = "AppCode", how = 'right') + + # has use data in past few days + l.loc[l["UseMinutes"]>0, "ActiveStatus"] = "Normal" + + #i.e. 
no use data, but is status export + l.loc[l["ActiveStatus"].isnull(), "ActiveStatus"] = "NoUseDataLately" + + #is also missing pd status data + l.loc[(l["ActiveStatus"]=="NoUseDataLately") & (l["PhoneModel"].isnull()),"ActiveStatus"]= "NoPDDataAtAll" + + l = l[keep_cols+["ActiveStatus"]] + + return l \ No newline at end of file diff --git a/17/replication_package/code/data/source/build_master/cleaners/clean_events_use.py b/17/replication_package/code/data/source/build_master/cleaners/clean_events_use.py new file mode 100644 index 0000000000000000000000000000000000000000..fa092c3e4a797acabe252d81745c7c5f5ac88656 --- /dev/null +++ b/17/replication_package/code/data/source/build_master/cleaners/clean_events_use.py @@ -0,0 +1,148 @@ +import os +import numpy as np +import pandas as pd +from lib.experiment_specs import study_config +from lib.data_helpers import data_utils + +from lib.data_helpers.builder_utils import BuilderUtils +from lib.utilities import serialize +from datetime import datetime, timedelta + +"""" +Contains functions that will process phone use data. The functions are inputted into the CleanEvents class. +See builder.py to see how they are implemented + + - Input: raw use (alt or traditional) + - Functions: + - filters out appcodes not in study + - removes screen inactive data (for traditional use only) + - creates phase specific avg daily use variables + + - Output: + - phase-level use + - user-app-hour use +""" +class CleanEventsUse(): + + def __init__(self,use_type): + self.use_type = use_type + self.class_hours = list(range(9, 16)) + self.sleep_hours = list(range(23, 25)) + list(range(1, 8)) + self.week_day = list(range(1, 6)) + self.use_subsets = { + + "": { + "Filters": {}, + "DenomCol": "DaysWithUse", + "NumCols": ["UseMinutes"]}, + + "FITSBY": { + "Filters": {"App": study_config.fitsby}, + "DenomCol": "DaysWithUse", + "NumCols": ["UseMinutes"]}, + + "Facebook": { + "Filters": {"App": ["facebook"]}, + "DenomCol": "DaysWithUse", + "NumCols": ["UseMinutes"]}, + + "Instagram": { + "Filters": {"App": ["instagram"]}, + "DenomCol": "DaysWithUse", + "NumCols": ["UseMinutes"]}, + + "Twitter": { + "Filters": {"App": ["twitter"]}, + "DenomCol": "DaysWithUse", + "NumCols": ["UseMinutes"]}, + + "Snapchat": { + "Filters": {"App": ["snapchat"]}, + "DenomCol": "DaysWithUse", + "NumCols": ["UseMinutes"]}, + + "Browser": { + "Filters": {"App": ["browser"]}, + "DenomCol": "DaysWithUse", + "NumCols": ["UseMinutes"]}, + + "Youtube": { + "Filters": {"App": ["youtube"]}, + "DenomCol": "DaysWithUse", + "NumCols": ["UseMinutes"]}, + + } + + # Use variables only calculated post endline + self.pe_use_subsets = { + + "FirstMobile": { + "Filters": {"First": [True]}, + "DenomCol": "FirstDaysWithUse", + "NumCols": ["UseMinutes"]}, + + "SecondMobile": { + "Filters": {"Second": [True]}, + "DenomCol": "SecondDaysWithUse", + "NumCols": ["UseMinutes"]}, + + } + + def prep_clean(self, df): + # Update Use Data + df["UseMinutes"] = df["Duration"] / (1000 * 60) + df = df.drop(columns = ["Duration"]) + + # Update Time Data + df["CreatedDate"] = df["CreatedDatetimeHour"].dt.date + + # Only Include Screen Active Data + if self.use_type == "Traditional": + df = df.loc[df["ScreenActive"]==1] + + if self.use_type == "Alternative": + if "BackupMinutes" in df.columns: + #adds in the missing launcher data (and any other missing data on the datetime hour level) + df.loc[(df["BackupMinutes"].notnull()) & (df["UseMinutes"].isnull()),"BackupBool"] = True + df.loc[df["BackupBool"].isnull(),"BackupBool"] = False + 
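+            # Added comment: rows flagged BackupBool appear to be datetime-hours missing
+            # from the alternative export (e.g. launcher use); their UseMinutes is filled
+            # from the BackupMinutes column in the assignment below.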
df.loc[df["BackupBool"]==True,"UseMinutes"] = df.loc[df["BackupBool"]==True,"BackupMinutes"] + + # The alternative data should not have any data from yesterday... likely from east coast exports at 3am + print(f"{len(df)} before removing yesterday data") + yesterday = datetime.now().date() - timedelta(1) + df = df.loc[df["CreatedDate"] < yesterday] + + + # Update App Data + df = df.rename(columns={"ForegroundApp": "App"}) + return df + + """Called in the Event Cleaner, after the data has been subsetted to a given phase""" + def phase_clean(self, df, phase): + # prep + df.loc[:, "DaysWithUse"] = df.groupby(by=['AppCode'])['CreatedDate'].transform(lambda x: x.nunique()) + df.loc[:, "FirstCreated"] = df.groupby(by=['AppCode'])['CreatedDate'].transform(min) + df.loc[:, "LastCreated"] = df.groupby(by=['AppCode'])['CreatedDate'].transform(max) + + # summarize + cat_df = df.groupby("AppCode")['FirstCreated','LastCreated','DaysWithUse'].first().reset_index() + sum_df = BuilderUtils.get_subsets_avg_use(df, self.use_subsets) + sum_df = sum_df.merge(cat_df, how = 'outer', on = "AppCode") + return sum_df + + @staticmethod + def get_timezones(df, timecol,eastern_timecol): + # Get the modal timezone for each user to adjust the eastern survey times to local time of user + df = df[["AppCode",timecol,eastern_timecol]].sort_index(by=[timecol]) + df["EastToLocal"] = df[timecol] - df[eastern_timecol] + df = df.loc[df["EastToLocal"].notnull()] + + df = df.loc[(df["EastToLocal"].notnull())] + df_sub = df.groupby(["AppCode", "EastToLocal"])[timecol].count().reset_index().sort_values(by=timecol, + ascending=False) + df_sub_m = df_sub.groupby(["AppCode"])["EastToLocal"].first().reset_index() + + config_user_dict = serialize.open_yaml("config_user.yaml") + if config_user_dict['local']['test']==False: + serialize.save_pickle(df_sub_m, os.path.join("data", "external", "intermediate", "Timezones")) + df_sub_m.to_csv(os.path.join("data", "external", "intermediate", "Timezones.csv")) \ No newline at end of file diff --git a/17/replication_package/code/data/source/build_master/cleaners/clean_surveys.py b/17/replication_package/code/data/source/build_master/cleaners/clean_surveys.py new file mode 100644 index 0000000000000000000000000000000000000000..9a00df496cd2bf224410cb400ca053320543961d --- /dev/null +++ b/17/replication_package/code/data/source/build_master/cleaners/clean_surveys.py @@ -0,0 +1,37 @@ +import pandas as pd +import os +from lib.experiment_specs import study_config +from lib.data_helpers.clean_survey import CleanSurvey + +class CleanSurveys(): + + @staticmethod + def clean_all_surveys(): + cleaned_surveys = {} + for survey, specs in study_config.surveys.items(): + if survey in study_config.filler_surveys: + continue + qualtrics_cleaner = CleanSurvey(survey_name = survey, + input_dir = os.path.join("data","external","dropbox_confidential","Surveys"), + output_dir = os.path.join("data","external","intermediate","Surveys")) + + if os.path.exists(qualtrics_cleaner.raw_file_path): + cleaned_surveys[survey] = qualtrics_cleaner.clean_survey() + + return cleaned_surveys + + @staticmethod + def _append_surveys(cleaned_surveys,survey1,survey2,name): + s1 = cleaned_surveys[survey1] + s2 = cleaned_surveys[survey2] + assert study_config.surveys[survey1]["Code"] == study_config.surveys[survey2]["Code"] + s_comp = s1.append(s2, sort = False) + + # drop unmerged dfs from memory + for old in [survey1, survey2]: + cleaned_surveys.pop(old, None) + + cleaned_surveys[name] = s_comp + + s_comp.to_csv(os.path.join("data", 
"external", "intermediate", "Surveys", f"{name}.csv"), index=False) + return cleaned_surveys \ No newline at end of file diff --git a/17/replication_package/code/data/source/build_master/master_raw_user.py b/17/replication_package/code/data/source/build_master/master_raw_user.py new file mode 100644 index 0000000000000000000000000000000000000000..9fb30648b690d818ccaf5bbc0d0d371d46768c87 --- /dev/null +++ b/17/replication_package/code/data/source/build_master/master_raw_user.py @@ -0,0 +1,69 @@ +from lib.experiment_specs import study_config +from lib.data_helpers import test + +from lib.utilities import serialize + +import os +import pandas as pd + +class MasterRawUser(): + raw_file = os.path.join("data","external","intermediate", "MasterRawUser") + raw_test_file = os.path.join("data","external","intermediate_test", "MasterRawUser") + + def __init__(self,initial_survey_df): + self.merge_col = "AppCode" + self.files_added = [] + self.main_cols_added = [] + self.raw_master_df = self.initialize_master(initial_survey_df) + self.config_user_dict = serialize.open_yaml("config_user.yaml") + + """initializes master df with the initial survey""" + def initialize_master(self,initial_survey_df): + master = pd.DataFrame() + print("Initializing Master with {0}".format(study_config.initial_master_survey)) + initial_survey_df = self.prep_df(initial_survey_df) + master = master.append(initial_survey_df) + return master + + """Adds additional data to master_contacts (runs after initializing)""" + def add(self, clean_data_frames): + master = self.raw_master_df + + for name, df in clean_data_frames.items(): + if name == study_config.initial_master_survey: + continue + + df = self.prep_df(df) + print("Adding {0}".format(name)) + + master = master.merge(df, how="outer", left_on=self.merge_col, right_on=self.merge_col) + self.files_added.append(name) + print(f"\t added {name} to master") + + if self.config_user_dict['local']['test']: + test.save_test_df(master,self.raw_test_file) + else: + serialize.save_pickle(master,self.raw_file) + master.to_csv(self.raw_file+".csv", index = False) + + self.raw_master_df = master + + """preps a df for merge to master""" + def prep_df(self, df): + """ensure the df has the merge col""" + + for column in df.columns: + + # remove any main cols that have already been added to master, except for the merge column""" + if column in study_config.main_cols: + if column not in self.main_cols_added: + self.main_cols_added.append(column) + continue + elif column == self.merge_col: + continue + else: + try: + df = df.drop(columns=[column]) + except: + print("failed to drop a main col") + return df \ No newline at end of file diff --git a/17/replication_package/code/data/source/build_master/master_raw_user_day_app.py b/17/replication_package/code/data/source/build_master/master_raw_user_day_app.py new file mode 100644 index 0000000000000000000000000000000000000000..7d0fde3c277755752c4ebffa3a4da24e198c67a3 --- /dev/null +++ b/17/replication_package/code/data/source/build_master/master_raw_user_day_app.py @@ -0,0 +1,196 @@ +import os +from functools import reduce +import pandas as pd +import numpy as np +from datetime import datetime, timedelta + +import sys +import git + +#importing modules from root of data +root = git.Repo('.', search_parent_directories = True).working_tree_dir +sys.path.append(root) +os.chdir(os.path.join(root)) + +from lib.experiment_specs import study_config +from lib.data_helpers.builder_utils import BuilderUtils +from lib.data_helpers import test +from lib.utilities 
import serialize +from data.source.exporters.stata import Stata +from lib.utilities import codebook + + + + +config_dic = serialize.open_yaml("config.yaml") + +""" +Input: + 1. RawSnooze data on the event level + 2. RawLimit data, on the appcodeXappXnewlimit level + 3. HourLevel use data + +Output: + +Dataframe on the UserXAppXDay level, with the following columns: + - AppCode - UserID + - App - app + - Date - date + - PlayStoreCategory + - DurationMinutes - the minutes the app was used + - Checks - the number of times the app was checked + - LimitMinutes - the cap for the app. If nan, there is no limit + - Apps Blocked - Snooze Offered - number of times the user hit their cap and was offered a snooze + - App Blocked - Snooze Unavailable - app blocked when the user is under the NoSnooze blocker type + - App Blocked Until Delay Elapsed - number of times the user attempted to open their app during the snooze delay + - App Warning - number of times the user was given a limit warning (i.e. 5 mins left and 1 mins left) + - User Closed Warning - the user closed a warning (for t minus 1 or 5 minutes left) + - Snooze Enable - number of times the user snoozed + - User Declined - the number of times the user declined to snooze + - Phase - the phase the user is in. If blank, that means it's a grace day +""" + +class MasterRawUserDayApp: + user_app_day_file = os.path.join("data","external","intermediate","MasterUserAppDay") + test_uad_file = os.path.join("data","external","intermediate_test","MasterUserAppDay") + uad_final_codebook = os.path.join("data","external","final","UserAppDayFinalCodebook.csv") + sort_cols = ["AppCode", "App", "Date"] + + #TODO: Make this have a reset option, so the default will be to just add on a day of data + """ Merges Snooze Counts, active limits, and use data on the day level. 
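+ 
+     Schematically, once the study is past the Midline start the merge chain in
+     build() below is (otherwise only the day-level use data is kept):
+ 
+         day-level use   OUTER JOIN  snooze counts   on [AppCode, App, Date]
+         result          OUTER JOIN  limits/budget   on [AppCode, App, Date]
+         result          LEFT JOIN   status          on [AppCode, Date]
+ 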
Merges on local time zone""" + + @staticmethod + def build(alt_use,budget,snooze,status): + + # prep alt_hour level data + print(f"Prep Use Data {datetime.now()}") + alt_use = alt_use.rename(columns={"CreatedDate": "Date"}) + if "BackupMinutes" in alt_use.columns: + alt_day_use = alt_use.groupby(MasterRawUserDayApp.sort_cols)["UseMinutes","BackupMinutes"].sum().reset_index().rename( + columns={"UseMinutes": "AltUseMinutes"}) + else: + alt_day_use = alt_use.groupby(MasterRawUserDayApp.sort_cols)[ + "UseMinutes"].sum().reset_index().rename( + columns={"UseMinutes": "AltUseMinutes"}) + alt_day_use_dd = MasterRawUserDayApp._mem_redux(alt_day_use) + # dfs.append(alt_day_use_dd) + del [alt_day_use, alt_use] + + dfs = [] + if datetime.now()>study_config.surveys["Midline"]["Start"]: + print(f"Prep Budget Data {datetime.now()}") + budget["Date"] = budget["EffectiveDate"] + budget = budget.loc[budget["App"] != "placeholder.app.does.not.exist"] + budget = budget.sort_values(by = MasterRawUserDayApp.sort_cols) + gp_budget = budget.groupby(MasterRawUserDayApp.sort_cols)["NewLimit"].last().reset_index() + gp_budget_dd = MasterRawUserDayApp._mem_redux(gp_budget) + #dfs.append(gp_budget_dd) + del [gp_budget,budget] + + print(f"Prep Snooze Data {datetime.now()}") + snooze = snooze.sort_values(by=MasterRawUserDayApp.sort_cols) + gp_snooze = snooze.groupby(MasterRawUserDayApp.sort_cols+["Event"])["Date"].agg(["count"]).reset_index() + gp_snooze["count"] = gp_snooze["count"].round(0) + snooze_wide = gp_snooze.pivot_table(index= MasterRawUserDayApp.sort_cols, + values=["count"], + columns=['Event'], + aggfunc='first') + snooze_wide.columns = [''.join(col).strip().replace("count","") for col in snooze_wide.columns.values] + snooze_wide = snooze_wide.reset_index() + + snooze_extension_df = snooze.groupby(MasterRawUserDayApp.sort_cols)["SnoozeExtension"].sum().reset_index() + snooze_extension_df["TotalSnoozeMinutes"] = snooze_extension_df["SnoozeExtension"]/(60*1000) + snooze_wide = snooze_wide.merge(snooze_extension_df.drop(columns = ["SnoozeExtension"]), how = "left", on = MasterRawUserDayApp.sort_cols) + snooze_wide = snooze_wide.fillna(0) + snooze_event_columns = [x for x in snooze_wide.columns if x not in MasterRawUserDayApp.sort_cols] + snooze_wide_dd = MasterRawUserDayApp._mem_redux(snooze_wide) + del [snooze_wide, snooze,snooze_extension_df] + + print(f"Prep Status Data {datetime.now()}") + status = status.rename(columns={"LastUploadDate":"Date"}) + st_dd = status.groupby(["AppCode","Date"]).last().reset_index() + st_dd = st_dd[["AppCode","Date","OptedOut"]] + del [status] + + df = alt_day_use_dd.merge(snooze_wide_dd, on=MasterRawUserDayApp.sort_cols, how = 'outer') + print(f"Completed snooze merge! {datetime.now()}") + + df = df.merge(gp_budget_dd, on=MasterRawUserDayApp.sort_cols, how='outer') + print(f"Completed budget merge! {datetime.now()}") + + df = df.merge(st_dd, on=["AppCode","Date"], how='left') + print(f"Completed status merge! 
{datetime.now()}") + + for var in snooze_event_columns + ["AltUseMinutes"]: + df[var] = df[var].fillna(0) + df["AppBlocked"] = df["App Blocked - Snooze Offered"] + df["App Blocked - Snooze Unavailable"] + \ + df['App Blocked Until Delay Elapsed'] + + df = BuilderUtils.add_phase_label(raw_df=df, raw_df_date="Date") + df = df.sort_values(by=MasterRawUserDayApp.sort_cols).reset_index(drop=True) + + # fill nan by making latest effective limit the limit by userXapp + df["NewLimit"] = df.groupby(["AppCode", "App"])["NewLimit"].fillna(method='ffill') + + #fill in the missing opt out data (we do it by app, just b/c of the storted order of the df) + df["OptedOut"] = df.groupby(["AppCode", "App"])["OptedOut"].fillna(method='ffill') + + # convert new limit to nan if the user removes limit + df.loc[df["NewLimit"].astype(float) == -1, "NewLimit"] = np.nan + + #if the user has opted out, the limit should be nan + df.loc[df["OptedOut"].astype(float) == 1, "NewLimit"] = np.nan + + df["LimitMinutes"] = df["NewLimit"] / (60 * 1000) + print(f" Cascaded Limit Data {datetime.now()}") + + else: + df = alt_day_use_dd.copy() + + config_user_dict = serialize.open_yaml("config_user.yaml") + test.save_test_df(df, MasterRawUserDayApp.test_uad_file) + if not config_user_dict['local']['test']: + df.to_csv(MasterRawUserDayApp.user_app_day_file+".csv", index=False) + serialize.save_pickle(df, MasterRawUserDayApp.user_app_day_file) + + codebook_dict = pd.read_excel(codebook.manual_specs_path, index_col="VariableName", + sheet_name="UserAppDay").to_dict(orient='index') + Stata().general_exporter(clean_master_df=df, + cb_dict=codebook_dict, + level_name="UserAppDay", + is_wide=False) + + # Export Codebook to Final Folder + pd.DataFrame.from_dict(codebook_dict, orient='index').to_csv(MasterRawUserDayApp.uad_final_codebook) + + print(f"\n Saved UAD! 
{datetime.now()}") + + @staticmethod + def _mem_redux(df): + for col in df.columns: + if col in ["AppCode","App"]: + df[col] = df[col].astype('category') + + elif df[col].dtype != object: + try: + df[col] = df[col].astype(np.int32) + except: + print(f"could not convert {col} to int32") + + #df = df.set_index("AppCode") + #df_dd = dd.from_pandas(df, npartitions=study_config.cores) + return df + +if __name__ == "__main__": + + config_user_dict = serialize.open_yaml("config_user.yaml") + if config_user_dict['local']['test']: + alt_use = serialize.open_pickle(os.path.join("data","external","intermediate_test","PhoneDashboard","Alternative")) + else: + alt_use = serialize.open_pickle(os.path.join("data", "external", "intermediate", "PhoneDashboard", "Alternative")) + + budget = serialize.open_pickle(os.path.join("data","external", "intermediate", "PhoneDashboard", "Budget")) + snooze = serialize.open_pickle(os.path.join("data","external", "intermediate", "PhoneDashboard", "Snooze")) + status = serialize.open_pickle(os.path.join("data", "external", "intermediate", "PhoneDashboard", "Status")) + + MasterRawUserDayApp.build(alt_use,budget,snooze,status) \ No newline at end of file diff --git a/17/replication_package/code/data/source/build_master/pullers/pull_events_alt.py b/17/replication_package/code/data/source/build_master/pullers/pull_events_alt.py new file mode 100644 index 0000000000000000000000000000000000000000..9a00e7c5cddbb3da3eac55d36c56e00f89b71c89 --- /dev/null +++ b/17/replication_package/code/data/source/build_master/pullers/pull_events_alt.py @@ -0,0 +1,259 @@ +import os +import sys +import pandas as pd +import json +import git +import multiprocessing + + +#importing modules from root of data +root = git.Repo('.', search_parent_directories = True).working_tree_dir +sys.path.append(root) +os.chdir(os.path.join(root)) + +from datetime import datetime, timedelta + +from lib.data_helpers import data_utils +from lib.utilities import serialize +from lib.experiment_specs import study_config +from lib.data_helpers.pull_events import PullEvents + + + +class PullEventsAlt(): + + def __init__(self, + install_dir = os.path.join("data", "external", "input", "PhoneDashboard", "RawAltInstall"), + skip_use = False): + """ + Purpose: Processed data found in a json file found in Alternative data zipfolder. It currently only processes + alternative use data and install data + + install_dir: the directory where processed installation data will be dumped + """ + + self.duplicate_fitsby_apps = pd.read_excel(os.path.join("lib", "experiment_specs", "FITSBY_apps.xlsx")) + self.install_dir = install_dir + self.skip_use = skip_use + + def read_alt(self, path): + """ + input: + - path: the path of the json file to read + - skip_use: skips use data collection if equal to True + + purpose: loads the user json and calls processing functions. + + returns: the user-app-hour data file which will be used to aggregate in memory. The install data is first + serialized on the user level and aggregated later. 
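+ 
+         Usage sketch (the file name below is hypothetical; real exports contain
+         "-alternative-" followed by a YYYY-MM-DD date and the .json suffix):
+ 
+             puller = PullEventsAlt()
+             hour_df = puller.read_alt("A1B2C3-alternative-2020-06-01.json")
+             # hour_df: one row per app x hour, with AppCode, Created, Observed, ...
+             # install data in the same json is pickled per appcode as a side effect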
+ """ + day_str = path.split("-alternative-")[-1].replace(".json","") + if ".json" not in path: + return pd.DataFrame() + + with open(path) as json_file: + data = json.load(json_file) + + if self.skip_use == False: + p_data = self.read_alt_use_data(data) + else: + p_data = pd.DataFrame() + + try: + self.read_install_data(data,day_str) + except: + print(f"could not process install data for {path}") + + return p_data + + def read_alt_use_data(self, data): + """gets the alternative use data, housed in the yesterday key (to ensure completeness). note that we submit all data for aggregation, even from + jsons that house data from a couple weeks ago. in the aggregation step, we removed duplicates""" + if "yesterday" not in data["event_details"]: + return pd.DataFrame() + + y_data = data["event_details"]["yesterday"] + start = y_data["start-range"] + y_data.pop("start-range", None) + y_data.pop("end-range", None) + appcode = data["passive-data-metadata"]["source"] + timezone = data["passive-data-metadata"]["timezone"] + hour_df_list = [] + for hour in y_data.keys(): + hour_df = pd.DataFrame.from_dict(y_data[hour], orient='index') + if len(hour_df) > 0: + hour_df = hour_df.reset_index() + hour_df.columns = [x.replace("_", " ").title().replace(" ", "") for x in hour_df.columns] + hour_df["Created"] = start.replace("T00:00", "T" + hour) + hour_df["AppCode"] = appcode + + # UTF Timestamp associated with this usage summary. there may be duplicate date summaries and we will keep the one with the later 'observed' timestamp, + # since it contains a better picture of all use for a given day + hour_df['Observed'] = data['observed'] + + # Remove Duplicate FITSBY App Data + fitsby_like_apps = list(self.duplicate_fitsby_apps["App"]) + hour_df = hour_df.loc[~hour_df["Index"].isin(fitsby_like_apps)] + hour_df_list.append(hour_df) + + if len(hour_df_list) > 0: + hr_data = pd.concat(hour_df_list) + else: + hr_data = pd.DataFrame() + return hr_data + + def read_install_data(self,data,day_str): + """ Process installed apps data from the day key. 
we serialize data on the appcode level, and serialize later + in builder.build_pd_alt""" + + appcode = data["passive-data-metadata"]["source"] + ymd = day_str.split("-") + + date = datetime(int(ymd[0]),int(ymd[1]),int(ymd[2]),0,0).date() + + raw_installed_path = os.path.join(self.install_dir,f"{appcode}.pickle") + + if os.path.isfile(raw_installed_path): + if os.path.getsize(raw_installed_path)>0: + old_i_df = serialize.open_pickle(raw_installed_path) + + #if the day is already in the raw data for the appcode, continue (install data need not be terribly precise) + #if date in list(old_i_df["Date"].unique()): + # pass + #else: + i_df = PullEventsAlt.process_install_data(data, appcode, date) + new_i_df = old_i_df.append(i_df) + new_i_df = new_i_df.drop_duplicates(subset=["AppCode", "App", "Date"]).reset_index(drop = True) + #serialize.save_hdf(new_i_df,raw_installed_path.replace(".pickle",".h5")) + serialize.save_pickle(new_i_df, raw_installed_path) + + else: + i_df = PullEventsAlt.process_install_data(data, appcode, date) + #serialize.save_hdf(i_df, raw_installed_path.replace(".pickle",".h5")) + serialize.save_pickle(i_df, raw_installed_path) + else: + i_df = PullEventsAlt.process_install_data(data,appcode,date) + #serialize.save_hdf(i_df, raw_installed_path.replace(".pickle",".h5")) + serialize.save_pickle(i_df, raw_installed_path) + + @staticmethod + def process_install_data(data,appcode,date): + # get installed apps and remove duplicates + apps = list(data["event_details"]["day"].keys()) + apps = [x for x in apps if x not in study_config.fitsby] + + i_df = pd.DataFrame({"App": apps}) + i_df["App"] = i_df["App"].astype('category') + i_df["AppCode"] = appcode + i_df["Date"] = date + return i_df + + @staticmethod + def process_raw_use(df, zip_file, event_puller): + """ use cleaning that occurs before the df is merged to the full df, as apart of pull_events.py""" + + df = df.rename(columns = {"UsageMs":"Duration", + "Package":"ForegroundApp", + "Category":"PlayStoreCategory", + "com.audacious_software.phone_dashboard.APP_TIME_BUDGET":"AltEffectiveLimit"}) + df = data_utils.clean_iso_dates(df, 'Created') + + return df + + @staticmethod + def combine_trad_alt(raw_alt, clean_trad): + """adds clean trad data to raw alt data b/c during alt data + cleaning, trad data will fill in alt data where alt is missing. 
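+ 
+         Schematically (the fill itself happens later, in CleanEventsUse.prep_clean):
+ 
+             trad:   UseMinutes renamed to BackupMinutes
+             merged: alt OUTER JOIN trad on [AppCode, CreatedDatetimeHour, ForegroundApp]
+             where UseMinutes is missing and BackupMinutes is present,
+                 UseMinutes is taken from BackupMinutes
+ 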
+ - We DON'T ADD in data from yesterday b/c it's normal to not have that data """ + + yesterday = datetime.now().date() - timedelta(1) + clean_trad = clean_trad.rename(columns={"UseMinutes": "BackupMinutes"}) + clean_trad = clean_trad.loc[clean_trad["CreatedDate"] < yesterday] + + merge_cols = ["AppCode", "CreatedDatetimeHour", "ForegroundApp"] + clean_trad = clean_trad.rename(columns={"App": "ForegroundApp"}) + df = raw_alt.merge(clean_trad[merge_cols + ["BackupMinutes"]], on=merge_cols, how="outer") + return df + + @staticmethod + def recover_install_data(): + pull_month = "" + schedule = {datetime(2020,8,16,0,0): "04", + datetime(2020,8,17,0,0): "05", + datetime(2020,8,20,0,0): "07"} + + for schedule_date,month in schedule.items(): + if datetime.now().date() == schedule_date.date(): + pull_month = month + + if pull_month != "": + print(f"pulling install data for month {pull_month}") + print(f"\n Begin Process {datetime.now()}") + # recovering install data + alt_json_reader = PullEventsAlt(skip_use=True) + + #initialize a pull events class JUST to access ._open_zipfolders_install + pd_alt_puller = PullEvents(source="PhoneDashboard", + keyword="Alternative", + scratch=False, + test=False, + time_cols=["Created"], + raw_timezone="Local", + appcode_col='AppCode', + identifying_cols=["AppCode", "ForegroundApp", "CreatedDatetimeHour"], + sort_cols=["Observed", "CreatedDatetimeHour"], + drop_cols=["Com.AudaciousSoftware.PhoneDashboard.AppTimeBudget", "Timezone", + "CreatedDatetime", "CreatedEasternDatetime", "Label", "CreatedDate", + "PlayStoreCategory", "DaysObserved", "Index", "ZipFolder", + "CreatedEasternMinusLocalHours"], + cat_cols=["ForegroundApp"], + compress_type="folder", + processing_func=alt_json_reader.process_raw_use, + file_reader=alt_json_reader.read_alt) + + all_zip_files = list(os.listdir(pd_alt_puller.zipped_directory)) + zip_files = [x for x in all_zip_files if (f"2020-{pull_month}" in x) & ("FOLDER" not in x)] + print(len(zip_files)) + + config_user_dict = serialize.open_yaml("config_user.yaml") + if config_user_dict['local']['test'] == True: + zip_files = zip_files[0:6] + + if config_user_dict['local']['parallel'] == True: + # create the pool object for multiprocessing + pool = multiprocessing.Pool(processes=study_config.cores) + + # split the files to add into n lists where n = cores + chunks = [zip_files[i::study_config.cores] for i in range(study_config.cores)] + print(chunks) + print(f"Multiprocessing with {study_config.cores} cpus") + pool.map(func=pd_alt_puller._open_zipfolders_install, iterable=chunks) + pool.close() + print(f"Done Multiprocessing {datetime.now()}") + + else: + pd_alt_puller._open_zipfolders_install(zip_files) + + #######OLD ######### + def process_alt_event_data(self, data,event_key): + appcode = data["passive-data-metadata"]["source"] + raw_appcode_path = os.path.join("data","external","input","PhoneDashboard",f"RawAlt{event_key.capitalize()}",f"{appcode}.pickle") + l = data["event_details"][event_key] + + if os.path.isfile(raw_appcode_path): + old_l = serialize.open_pickle(raw_appcode_path,df_bool=False) + incoming_keys = list(l.keys()) + old_keys = list(old_l.keys()) + new_events = [x for x in incoming_keys if x not in old_keys] + + #update the old_l dictionary + for new_event in new_events: + old_l[new_event] = l[new_event].copy() + serialize.save_pickle(old_l, raw_appcode_path, df_bool=False) + else: + print("STILL CANT FIND APPCODE PICKLE") + serialize.save_pickle(l, raw_appcode_path, df_bool=False) + + +if __name__ == "__main__": + 
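+     # Running this module directly kicks off the scheduled install-data recovery above.
+     # For reference, the round-robin chunking used there to spread zip files across
+     # cores is plain step slicing, e.g. with 3 cores:
+     #   files = ["a", "b", "c", "d", "e"]
+     #   [files[i::3] for i in range(3)]   ->   [["a", "d"], ["b", "e"], ["c"]]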
PullEventsAlt.recover_install_data() diff --git a/17/replication_package/code/data/source/build_master/pullers/pull_events_use.py b/17/replication_package/code/data/source/build_master/pullers/pull_events_use.py new file mode 100644 index 0000000000000000000000000000000000000000..77c7e3b66d555034d4608bf25914c1ad4ead7cb2 --- /dev/null +++ b/17/replication_package/code/data/source/build_master/pullers/pull_events_use.py @@ -0,0 +1,108 @@ +import os +import pandas as pd + +from datetime import datetime +from lib.data_helpers import data_utils +from lib.data_helpers.gaming import Gaming +from lib.utilities import serialize + +#TODO: update this description + +class PullEventsUse(): + input_dir = os.path.join("data","external","input","PhoneDashboard") + agg_file = os.path.join(input_dir, "RawUse") + problem_folder = os.path.join(input_dir, "ProblemRaw") + raw_use_zipped_directory = os.path.join(input_dir, "RawUseZipped") + raw_hour_level_directory = os.path.join(input_dir, "RawHourLevel") + + @staticmethod + def process_raw_use(df, zip_file, event_puller): + df = PullEventsUse._clean_granular_df(df, zip_file) + PullEventsUse._check_granular_df(df, zip_file) + + Gaming.scan(df, zip_file.replace(".zip",""), first_last_bool=False) + Gaming.get_first_last(df, zip_file.replace(".zip","")) + + df = df.groupby(by=['AppCode', + 'ForegroundApp', + 'CreatedDatetimeHour', + 'ScreenActive',], as_index=False)['Duration','Pickups','Checks'].sum() + return df + + @staticmethod + def process_raw_use_indiv(df, zip_file, event_puller): + df = PullEventsUse._clean_granular_df(df, zip_file) + PullEventsUse._check_granular_df(df, zip_file) + + df.loc[(df.PrevForegroundApp != df.ForegroundApp)| + (df.PrevScreenActive != df.ScreenActive), 'StartTime'] = df.CreatedDatetime + df['StartTime'] = df.StartTime.ffill() + + df = df[df.ScreenActive == 1].groupby(by=['AppCode', + 'ForegroundApp', + 'StartTime'], as_index=False)['Duration','Pickups','Checks'].sum() + df["UseMinutes"] = df["Duration"] / (1000 * 60) + df["UseSeconds"] = df["Duration"] / (1000) + df = df.drop(columns = ["Duration"]) + + return df + + """cleans granular df and records problems""" + @staticmethod + def _clean_granular_df(df, zip_file): + # Remove Duplicate App Data and rename problems + duplicate_fitsby_apps = pd.read_excel(os.path.join("lib","experiment_specs","FITSBY_apps.xlsx")) + df = df.merge(duplicate_fitsby_apps, + how='left', + left_on='ForegroundApp', + right_on='App') + + df.loc[~pd.isna(df.Fitsby), 'ForegroundApp'] = df.Fitsby + df = df.drop('Fitsby', 1) + + # Clean dates + df.columns = df.columns.str.replace('Date', '') + df = data_utils.clean_iso_dates(df, 'Recorded') + df = data_utils.clean_iso_dates(df, 'Created') + df.loc[:, "Zipfile"] = zip_file + + df = df.drop_duplicates(subset=['AppCode', 'CreatedDatetime', 'ForegroundApp']) + + # prev vars will be the values of the var from the row above + df["PrevScreenActive"] = df["ScreenActive"].shift(1) + df['Pickups'] = 0 + df.loc[(df['PrevScreenActive'] == 0) & (df["ScreenActive"] == 1), 'Pickups'] = 1 + + df["PrevForegroundApp"] = df["ForegroundApp"].shift(1) + df["Checks"] = df["Pickups"] + df.loc[(df["PrevForegroundApp"] != df["ForegroundApp"]) + & (df['PrevScreenActive'] == 1) & + (df["ScreenActive"] == 1), "Checks"] = 1 + df.loc[df["Checks"].isnull(), "Checks"] = 0 + + return df + + @staticmethod + def _check_granular_df(df, zip_file): + problem_df = pd.DataFrame() + # Check if Created date occured before the recorded date time + test = df.loc[df['CreatedDatetime'] > 
df['RecordedDatetime']] + if len(test) > 0: + problem_df.append(test) + + # Assert that raw data are sorted by AppCode CreatedDatetime(smallest to largest) + if 'TimeZone' in df.columns: + df["CreatedPrevEasternDatetime"] = df["CreatedEasternDatetime"].shift(1) + df["PrevAppCode"] = df["AppCode"].shift(1) + verify = df.loc[df["PrevAppCode"] == df["AppCode"], + ["AppCode", "PrevAppCode", 'CreatedPrevEasternDatetime', 'CreatedEasternDatetime']] + + test2 = verify.loc[verify['CreatedPrevEasternDatetime'] > verify['CreatedEasternDatetime']] + if len(test2) > 0: + problem_df.append(test2) + + test3 = df.loc[df["CreatedDate"] == datetime.now().date()] + if len(test3) > 0: + problem_df.append(test3) + if len(problem_df) > 0: + serialize.save_pickle(problem_df, os.path.join(PullEventsUse.problem_folder, zip_file.replace(".zip", ".pickle"))) \ No newline at end of file diff --git a/17/replication_package/code/data/source/clean_master/cleaner.py b/17/replication_package/code/data/source/clean_master/cleaner.py new file mode 100644 index 0000000000000000000000000000000000000000..c7381a72fb184a6eb536d876f7de93497a303f32 --- /dev/null +++ b/17/replication_package/code/data/source/clean_master/cleaner.py @@ -0,0 +1,189 @@ +import os +import sys +import pandas as pd +import numpy as np +from datetime import datetime, timedelta + +from lib.experiment_specs import study_config + +from lib.data_helpers import test +from lib.data_helpers import data_utils + +from data.source.clean_master.outcome_variable_cleaners import outcome_cleaner +from data.source.exporters.master_contact_generator import MasterContactGenerator + +from data.source.clean_master.management.baseline_prep import BaselinePrep +from data.source.clean_master.management.midline_prep import MidlinePrep +from data.source.clean_master.management.endline1_prep import Endline1Prep +from data.source.clean_master.management.endline2_prep import Endline2Prep +from data.source.clean_master.management.earnings import Earnings + + +from lib.utilities import serialize + +np.random.seed(12423534) + +""" +cleans the aggregated raw master user level data file by: + - adding treatment/payment variables + - creates outcome variables and indices + - +""" + +class Cleaner(): + used_contact_list_directory = os.path.join("data","external","dropbox_confidential","ContactLists","Used") + master_file = os.path.join("data","external","intermediate", "MasterCleanUser") + master_test_file = os.path.join("data","external","intermediate_test", "MasterCleanUser") + qual_path = os.path.join("data", "external", "dropbox_confidential", "QualitativeFeedback") + + def __init__(self): + self.treatment_cl = pd.DataFrame() + self.used_contact_lists = self._import_used_contact_lists() + self.config_user_dict = serialize.open_yaml("config_user.yaml") + self.survey_prep_functions = {"Baseline":BaselinePrep.main, + "Midline":MidlinePrep.main, + "Endline1":Endline1Prep.main, + "Endline2":Endline2Prep.main} + #add filler surveys + for survey in study_config.surveys.keys(): + if "Phase" in survey: + self.survey_prep_functions[survey] = Endline2Prep.filler + + def clean_master(self,raw_master_df): + df = self._prepare_proper_sample(raw_master_df) + + """Prepare Outcome Variables""" + df = self.ingest_qual_data("PDBug", df) + df = outcome_cleaner.clean_outcome_vars(df) + + + """Prepare Embedded Data for Upcoming Surveys or Ingest Embedded Data from Used CLs""" + for phase_name, chars in study_config.phases.items(): + start_survey = chars["StartSurvey"]["Name"] + end_survey = 
chars["EndSurvey"]["Name"] + + if (datetime.now() < study_config.surveys[start_survey]["Start"]+ timedelta(3)): + print(f"\n No action for {end_survey} Randomization, {phase_name} isn't 3 days in") + continue + + if (datetime.now() > study_config.surveys[start_survey]["Start"] + timedelta(3)) & (datetime.now() < study_config.surveys[end_survey]["Start"]): + print(f"\n Prepping {end_survey} Randomization") + df = self.survey_prep_functions[end_survey](df) + + else: + if end_survey in study_config.filler_surveys: + continue + + elif end_survey not in self.used_contact_lists: + print(f"{end_survey} CL needs to be in used CL!! Need used treatment assignments") + sys.exit() + + else: + print(f"\n Adding embedded data on {end_survey} using CL, since {phase_name} is over") + df = self._add_cl_data(df,end_survey) + + """Calculate Earnings""" + df = Earnings().create_payment_vars(df) + + self.sanity_checks(df) + + if self.config_user_dict['local']['test']: + test.save_test_df(df,self.master_test_file) + + else: + test.select_test_appcodes(df) + serialize.save_pickle(df, self.master_file) + df_str = df.copy().astype(str).applymap(lambda x: x.strip().replace("\n", "").replace('"', '')) + df_str.to_csv(self.master_file+".csv", index = False) + + #seed_file = os.path.join("data","external","intermediate","Scratch", + # "BalanceChecks",f"MasterCleanUser{study_config.seed}.csv") + #df_str.to_csv(seed_file, index=False) + + return df + + """import used contact lists""" + def _import_used_contact_lists(self): + contact_lists = {} + for survey, cl_name in study_config.used_contact_lists.items(): + contact_lists[survey] = MasterContactGenerator.read_in_used_cl(cl_name,survey) + return contact_lists + + + def _prepare_proper_sample(self, df): + """ + This method crops the raw_master_user df to folks that attempted to complete registration + The method also asserts that each row is identified by a unique appcode + + # We want to keep people that never downloaded the app but ATTEMPTED TO COMPLETE registration for attrition analysis + # Attempted to keep registration means, they saw the consent form, and clicked continue, though they may not + # have downloaded the app. 
+ + """ + + initial_code = study_config.surveys[study_config.initial_master_survey]["Code"] + df = df.loc[df[f"{initial_code}_Complete"] != "nan"].dropna( + subset=[f"{initial_code}_Complete"]) + + # Reverse Order of DF so complete appear at top + df = df.iloc[::-1].reset_index(drop=True) + + if study_config.surveys[study_config.initial_master_survey]["End"] < datetime.now(): + try: + assert len(df) == study_config.sample_size + except: + print(f"length of df ( {len(df)}) not same size as study_config.sample_size: {study_config.sample_size}") + sys.exit() + appcode_series = df.loc[df["AppCode"].notnull(), 'AppCode'] + assert (len(appcode_series) == len(appcode_series.unique())) + return df + + def ingest_qual_data(self, survey, df): + file = study_config.qualitative_feedback_files[survey] + code = study_config.surveys[survey]["Code"] + q = pd.read_csv(os.path.join(self.qual_path, file)) + q = data_utils.add_A_to_appcode(q, "AppCode") + pii_cols = sum([x for x in study_config.id_cols.values()], []) + for col in q.columns: + if col in pii_cols + ["RecipientEmail"]: + q = q.drop(columns=[col]) + elif col in study_config.main_cols+study_config.embedded_main_cols: + continue + else: + q = q.rename(columns={col: code + "_" + col}) + q = q.loc[(~q.duplicated(subset=["AppCode"], keep='last'))] + + new_cols = ["AppCode"] + list(set(q.columns) - set(df.columns)) + print(new_cols) + df = df.merge(q[new_cols], how='left', on='AppCode') + return df + + def _add_cl_data(self,df,survey): + """Override all the treatment columns created, and insert those created in the used contact list + Also Add Used CL avg daily use data""" + + old_phase = study_config.surveys[survey]["OldPhase"] + prev_code = study_config.phases[old_phase]["StartSurvey"]["Code"] + + cl = self.used_contact_lists[survey] + cl = cl.rename(columns={"PastActual": f"{prev_code}_Cl{study_config.use_var}"}) + cl[f"{prev_code}_Cl{study_config.use_var}"] = pd.to_numeric(cl[f"{prev_code}_Cl{study_config.use_var}"], errors = 'coerce') + + # only keep prefixed columns (i.e. 
have a "_") that are not in the main df or main cols not in df + cl_vars_to_merge = ["AppCode"] + [x for x in cl.columns.values if ((x not in df.columns) & ("_" in x)) | + ((x not in df.columns) & (x in study_config.embedded_main_cols))] + print(f"\t {cl_vars_to_merge}") + df = df.merge(cl[cl_vars_to_merge], how ='left',on = "AppCode") + return df + + """check that used contact list column values match the recreation the clean_master master""" + def sanity_checks(self,df): + # Assert no obs were dropped in cleaning + if study_config.surveys["Baseline"]["End"] < datetime.now(): + if len(df) != study_config.sample_size: + print(f"CleanMaster (len = {len(df)}) is not same size a hard coded sample size ({study_config.sample_size})") + sys.exit() + appcode_series = df.loc[df["AppCode"].notnull(), 'AppCode'] + assert (len(appcode_series) == len(appcode_series.unique())) + + diff --git a/17/replication_package/code/data/source/clean_master/management/baseline_prep.py b/17/replication_package/code/data/source/clean_master/management/baseline_prep.py new file mode 100644 index 0000000000000000000000000000000000000000..8b509eafab35d22425e5f635c86062025bf9d1c4 --- /dev/null +++ b/17/replication_package/code/data/source/clean_master/management/baseline_prep.py @@ -0,0 +1,32 @@ + +from lib.data_helpers.treatment import Treatment + +class BaselinePrep: + # probability of being assigned certain rows in the baseline MPL + b_mpl = {"1": 0.999, + "6": 0.001} + + @staticmethod + def main(df): + df = BaselinePrep._get_baseline_eligible(df) + df = BaselinePrep._baseline_treatment(df) + return df + + @staticmethod + def _get_baseline_eligible(df): + df.loc[(df["ET_Complete"] == "Complete") & (df["ET_Q1"] == "Yes, I'm in!"), "B_Eligible"] = "Yes" + df.loc[df["B_Eligible"].isnull(), "B_Eligible"] = "No" + return df + + @staticmethod + def _baseline_treatment(df_master): + df = df_master.loc[df_master["B_Eligible"] == "Yes"] + df = Treatment.assign_treat_var(df=df, + rand_dict=BaselinePrep.b_mpl, + stratum_cols=["R_Complete"], + varname="B_MPLOptionAValue") + df.loc[df["B_MPLOptionAValue"] == "1", "B_RowNumber"] = "7" + df.loc[df["B_MPLOptionAValue"] == "6", "B_RowNumber"] = "5" + treat_vars = list(set(df.columns) - set(df_master.columns)) + df_master = df_master.merge(df[["AppCode"] + treat_vars], how='outer', left_on="AppCode", right_on="AppCode") + return df_master \ No newline at end of file diff --git a/17/replication_package/code/data/source/clean_master/management/earnings.py b/17/replication_package/code/data/source/clean_master/management/earnings.py new file mode 100644 index 0000000000000000000000000000000000000000..b6072db4dbf0445ffbf41358b34a349f024378b6 --- /dev/null +++ b/17/replication_package/code/data/source/clean_master/management/earnings.py @@ -0,0 +1,398 @@ +import pandas as pd +import numpy as np +import os +import random +import math +from datetime import datetime, timedelta +from lib.experiment_specs import study_config +from lib.utilities import serialize +from lib.data_helpers import data_utils +from lib.utilities import codebook + +class Earnings(): + """Calculates how much each person should be compensated AFTER Treatment Assignemnt + and AFTER Survey results are in""" + baseline_complete_pay = 5 + survey_complete_pay = 25 + + def create_payment_vars(self,df): + random.seed(study_config.seed) + + + """ each function creates a set of payment columns that are added up, in total payment""" + df = self._baseline_payment(df) + df = self._prediction_payment(df) + df = 
self._treatment_payment(df) + df = self._friend_payment(df) + df = self._participation_lottery(df) + df = self._blocker_earnings(df) + df = self._extra_incentive(df) + df = self._total_payment(df) + df = self._create_endline_message(df) + return df + + def _baseline_payment(self,df): + #if "R_Complete" in df.columns: + # df.loc[df["B_Complete"] == "Complete", "BaselinePayment"] = df.loc[df["B_Complete"] == "Complete", "InitialPaymentPart1"] + return df + + def _treatment_payment(self,df): + """ + + Parameters + ---------- + df - master file + + Returns master file with treatment payment variables + ------- + + M_RSIStatus determines who is eligble for payment in the first treatment phase: folks in the immediate bonus treatment group, + who also chose option B in the MPL + + E1_RSIStatus determines who is eligible for payment in the second treatment phase: all folks in the delayed bonus treatment group + """ + # create endline rsi status var b/c it wasn't created as embedded data + df.loc[df["BonusTreatment"] == "Delayed", "E1_"+study_config.rsi_var] = 1 + df.loc[(df["BonusTreatment"] != "Delayed") & (df["M_Complete"] == "Complete"), "E1_"+study_config.rsi_var] = 0 + + #if after endline administration, use the actual use from the endline survey + for survey,bonus_group in {"Midline":"Immediate","Endline1":"Delayed"}.items(): + if datetime.now()> study_config.surveys[survey]["End"]: + code = study_config.surveys[survey]["Code"] + earn_col = f"{code}_{study_config.use_var}" + + df.loc[(df["E2_Complete"] == "Complete") & (df["BonusTreatment"] == bonus_group) & + (df[f"{code}_RSIStatus"].astype(float) == 1), + f"{code}_RSIEarnings"] = (df["HourlyRate"].astype(float) * (df["Benchmark"] - df[earn_col]/60)).apply(lambda x: round(max(0, x), 2)) + + if datetime.now()> study_config.surveys["Endline1"]["End"]: + df["RSIEarnings"] = df["M_RSIEarnings"].fillna(0) + df["E1_RSIEarnings"].fillna(0) + + elif datetime.now()> study_config.surveys["Midline"]["End"]: + df["RSIEarnings"] = df["M_RSIEarnings"].fillna(0) + + df["RSIEarnings"] = df["RSIEarnings"].fillna(0).apply(lambda x: min(150,x)) + + return df + + def _prediction_payment(self,df): + """ + + generates column containing payments for all endline completes + """ + predict_question_dict = self._create_predict_question_dict(df) + if datetime.now() > study_config.surveys["Endline2"]["End"] + timedelta(5): + predict_q_count = len(predict_question_dict) + predict_questions = list(predict_question_dict.keys()) + for q in predict_questions: + df[q] = pd.to_numeric(df[q], errors='coerce') + + df["PredictQInt"] = np.random.randint(0, predict_q_count, len(df)) + df.loc[df["E2_Complete"] == "Complete", "PredictUseQuestion"] = df.loc[df["E2_Complete"] == "Complete", + "PredictQInt"].apply( + lambda x: predict_questions[x]) + df.loc[df["E2_Complete"] == "Complete", "ActualUseQuestion"] = df.loc[df["E2_Complete"] == "Complete", + "PredictUseQuestion"].apply( + lambda x: predict_question_dict[x]) + + df["PredictDiffMinutes"] = np.nan + for index, row in df.iterrows(): + if row["E2_Complete"] != "Complete": + continue + else: + diff = abs(float(row[row["PredictUseQuestion"]]) - float(row[row["ActualUseQuestion"]])) + df.loc[index, "PredictDiffMinutes"] = diff + df.loc[df["PredictDiffMinutes"] < 16, "PredictActualReward"] = df["PredictReward"].astype(float) + else: + df["PredictActualReward"] = np.nan + + return df + + def _create_predict_question_dict(self,df): + predict_question_dict = {} + phase_list = list(study_config.phases.keys()) + + i = 0 + for phase in 
phase_list: + chars = study_config.phases[phase] + survey = chars["StartSurvey"]["Name"] + if survey in ["Midline", "Endline1", "Endline2"]: + code = study_config.surveys[survey]["Code"] + predict_questions = [x for x in df.columns if f"{code}_PredictUseNext" in x] + for q in predict_questions: + predict_time = q[-1] + predict_phase_index = i + int(float(predict_time)) - 1 + predict_phase = phase_list[predict_phase_index] + predict_code = study_config.phases[predict_phase]["StartSurvey"]["Code"] + predict_question_dict[q] = f"{predict_code}_FITSBYUseMinutes" + i += 1 + return predict_question_dict + + def _friend_payment(self, df): + """ + - create a list of participant pairs where one referred the other + - each pair enters the lottery if both of them have E2_Complete == Complete + - draw one pair at random, and both participants receive $100 + """ + print("\n conduct friend payment lottery") + conf_cols = ["PhoneNumber", "FriendContact"] + pii_path = os.path.join("data", "external", "dropbox_confidential", "ContactLists", "Generator", "PII") + pii = serialize.open_pickle(pii_path).reset_index().rename(columns={"index": "AppCode"}) + df = df.drop(columns=["PhoneNumber", "B_FriendContact"]) + df = df.merge(pii[["AppCode"] + conf_cols], on="AppCode", how='left') + df['ReferLotteryEligible'] = np.nan + + # Create friend phone number sets (the variable B_FriendContact contains the friend referral number) + df_refer = df.loc[df["FriendContact"].notnull()] + refer_dict = dict(zip(df["PhoneNumber"], df["FriendContact"])) + refer_set = [] + for k, v in refer_dict.items(): + pair = {k, v} + refer_set.append(pair) + + # Create a list of phone numbers that completed the study + phone_complete_list = list(df.loc[df["E2_Complete"] == "Complete", "PhoneNumber"]) + + # find eligible pairs + eligible_pairs = [] + for pair in refer_set: + eligible = True + for number in pair: + if number not in phone_complete_list: + eligible = False + + if eligible == True & (pair not in eligible_pairs): + eligible_pairs.append(pair) + + # Randomly Select A Winning Pair + random.shuffle(eligible_pairs) + winning_pair = eligible_pairs[0] + print(len(eligible_pairs)) + print(f"Winning friend pair: {winning_pair}") + for number in winning_pair: + df.loc[df["PhoneNumber"] == number, "FriendPayment"] = 100 + + df = df.drop(columns=conf_cols) + return df + + def _participation_lottery(self, df): + """ + pick two appcodes from eligible codes to win 500 + """ + # compile eligible codes: endline completes and extra incentive codes from + # participants who were promised the lottery when they were dropped before the midline + extra_lot = pd.read_csv(os.path.join("data", "external", "dropbox_confidential", "ContactLists", "Used", + "Extra incentive", "ExtraLottery07062020.csv")) + extra_lot = data_utils.add_A_to_appcode(extra_lot, "AppCode") + extra_appcodes = set(list(extra_lot["AppCode"])) + complete_codes = set(df.loc[df["E2_Complete"] == "Complete", "AppCode"]) + lottery_codes = list(complete_codes.union(extra_appcodes)) + lottery_codes.sort() + + # shuffle and pick two appcodes + print(lottery_codes[0:5]) + random.shuffle(lottery_codes) + winning_codes = lottery_codes[0:2] + print(f"winning participation lottery {winning_codes}") + + for appcode in winning_codes: + df.loc[df["AppCode"] == appcode, "ParticipationLotteryPayment"] = 500 + + return df + + def _blocker_earnings(self,df): + """pay BlockerEarnings if a participant has EndlineBlockerChange==1 + (note that BlockerEarnings is an embedded variable Lena created in endline 
1 """ + + df.loc[(df["EndlineBlockerChange"] == 1) & + (df["E2_Complete"] == "Complete"), "BlockerPayment"] = df.loc[(df["EndlineBlockerChange"] == 1) & + (df["E2_Complete"] == "Complete"), "E1_BlockerEarnings"].astype(float) + return df + + def _extra_incentive(self, df): + """we sent various extra incentives to participants if they used pd through phase 5. + I've cleaned it up and summarized all our offers in the + Summary tab of Extra incentive summary in Extra incentive summary in Extra incentive summary""" + + extra_inc = pd.read_excel(os.path.join("data", "external", "dropbox_confidential", "ContactLists", "Used", + "Extra incentive", "Extra incentive summary.xlsx")) + extra_inc = data_utils.add_A_to_appcode(extra_inc, "AppCode") + extra_inc = extra_inc.rename(columns={"Amount": "ProposedExtraIncentive"}) + df = df.merge(extra_inc.drop_duplicates(subset = ["AppCode"]), on="AppCode", how='left') + df.loc[df["P5_UseMinutes"] > 0, "ExtraIncentive"] = df.loc[df["P5_UseMinutes"] > 0, "ProposedExtraIncentive"] + print(df["ExtraIncentive"].value_counts()) + return df + + """show actual total payment after endline ends, or show hypthetical total payment before endline ends""" + def _total_payment(self, df): + + # first add the study completion payments + #df["TotalPay"] = df["RSIEarnings"].fillna(0) + \ + # df["PredictActualReward"].fillna(0) + \ + # df["BaselinePayment"].fillna(0) + + df["AdditionalPay"] = df["PredictActualReward"].fillna(0) +\ + df["RSIEarnings"].fillna(0) +\ + df["FriendPayment"].fillna(0) +\ + df["ParticipationLotteryPayment"].fillna(0) +\ + df["BlockerPayment"].fillna(0) +\ + df["ExtraIncentive"].fillna(0) +\ + df["InitialPaymentPart2"].astype(float).fillna(0) + + df['AdditionalPay'] = df['AdditionalPay'].round(2) + + #then add the study completion payments + if "E2_Complete" not in df.columns: + #df["TotalPay"] = df["TotalPay"].apply(lambda x: x + self.survey_complete_pay) + df["AdditionalPay"] = df["AdditionalPay"].apply(lambda x: x + self.survey_complete_pay) + + else: + df.loc[df["E2_Complete"] == "Complete", "CompletionPayment"] = self.survey_complete_pay + df["AdditionalPay"] = df["AdditionalPay"] + df["CompletionPayment"].fillna(0) + + df.loc[df["E2_Complete"] == "Complete", ["AppCode", "RevealConfirm","E2_Complete", + "PredictQInt", "PredictUseQuestion", "ActualUseQuestion", + "PredictDiffMinutes", "PredictReward", "PredictActualReward", + "BonusTreatment", "Benchmark", "HourlyRate", "B_FITSBYUseMinutes", + "M_DaysWithUse","M_FITSBYUseMinutes", "M_RSIStatus", "M_RSIEarnings", + "E1_DaysWithUse", "E1_FITSBYUseMinutes", "E1_RSIStatus", "E1_RSIEarnings", + "RSIEarnings", + "FriendPayment","ParticipationLotteryPayment", + "E1_BlockerEarnings","BlockerPayment", + "ProposedExtraIncentive","ExtraIncentive", + "CompletionPayment","InitialPaymentPart2","AdditionalPay" + ]].to_csv(os.path.join("data", "external", "intermediate", "Scratch", "Earns.csv")) + + + return df + + def _create_endline_message(self,df): + df["OverallMessage"] = "" + for index, row in df.iterrows(): + message = "" + if float(row["AdditionalPay"]) > 0: + message = f"This total of ${round(row['AdditionalPay'],2)} includes: " + + if row["E2_Complete"] == "Complete": + message = message + f"""${row['CompletionPayment']} for completing the study """ + + if math.isnan(row["PredictActualReward"])== False: + message = message + f""", ${row['PredictActualReward']} for predicting your use accurately """ + + if float(row['RSIEarnings'])>0: + message = message + f""", ${round(row['RSIEarnings'],2)} from the 
Screen Time Bonus """ + + if float(row['FriendPayment'])>0: + message = message + f""", ${row['FriendPayment']} for winning the friend referral lottery """ + + if float(row['ParticipationLotteryPayment'])>0: + message = message + f""", ${row['ParticipationLotteryPayment']} for winning the participant lottery """ + + if float(row['BlockerPayment'])>0: + message = message + f""", ${row['BlockerPayment']} from the loss of your limit functions """ + + if float(row['ExtraIncentive'])>0: + message = message + f""", ${row['ExtraIncentive']} for earning an extra incentive """ + + if float(row['InitialPaymentPart2'])>0: + message = message + f""", ${row['InitialPaymentPart2']} for earning an additional initial payment after the first survey """ + df.loc[index, "OverallMessage"] = message + return df + + + +################################################## + """ endline messages""" + def old_create_endline_messages(self,df): + df["OverallMessage"] = "" + df["Phase1Message"] = "" + df["Phase2Message"] = "" + df["PredictionMessage"] = "" + + + for index, row in df.iterrows(): + if row[f"M2_ActualUse"]== float('nan') or row["M2_Complete"] != "Complete": + continue + + for code,range in [("M1","April 22 to May 11"),("M2","May 13 to June 1")]: + code_number = str(code[1]) + + if row[f"{code}_MPLRow"] != 0 or row[f"{code}_BlockerType"]=="costly_snooze": + + df.loc[index,"OverallMessage"] = df.loc[index,"OverallMessage"] + f"""${row[f"{code}_PhasePay"]} + for the phase from {range}. """ + + #df.loc[index,f"Phase{code_number}Message"] = f"{range}: " + + #if RSI + if int(row[f"{code}_RSI"]) == 1: + if row[f"{code}_RSIEarnings"] > 0: + compare = "less" + else: + compare = "greater" + + df.loc[index,f"Phase{code_number}Message"] = f""" + ${row[f"{code}_RSIEarnings"]} from the Reduced Screen Time Incentive because + your average daily phone use of {row[f"{code}_ActualUse"]} hours was {compare} + than your benchmark of {row[f"{code}_Benchmark"]} hours; """ + + #else FixedPayment + elif int(row[f"{code}_MPLRow"]) != 0: + df.loc[index,f"Phase{code_number}Message"] = f"""{range}: ${row[f"{code}_OptionAEarnings"]} for your fixed payment; """ + + #if CostlySnooze + if row[f"{code}_BlockerType"] == "costly_snooze": + df.loc[index,f"Phase{code_number}Message"] = df.loc[index,f"Phase{code_number}Message"] +\ + f""" ${row[f"{code}_RemainingSnoozeBudget"]} of remaining snooze balance; """ + + # Add prediction reward text + if math.isnan(row["PredictActualReward"])== False: + df.loc[index, "OverallMessage"] = df.loc[index, "OverallMessage"] + \ + f"""${row[f"PredictActualReward"]} for the Prediction Reward. """ + + df.loc[index,"PredictionMessage"] = f"""Your prediction on Survey + {row["SurveyNumber"]} was within 30 minutes of your actual use as measured by Phone Dashboard.""" + + for col in ["OverallMessage","Phase1Message","Phase2Message","PredictionMessage"]: + df[col] = df[col].replace("\t","").replace(" "," ") + return df + + + + def create_phase_earnings_message(self, df, phase): + specs = study_config.phases[phase] + old_survey = specs["StartSurvey"] + old_code = study_config.surveys[old_survey]['Code'] + df[f"{old_code}_EarningsMessage"] = \ + f"""${self.survey_complete_pay} + for completing the surveys and for continuing to use phone dashboard; ${self.baseline_complete_pay} + for completing the baseline. 
+ """ + for index, row in df.iterrows(): + if row[f"{old_code}_ActualUse"] == float('nan'): + continue + + if row[f"{old_code}_ReceivesSubsidy"] == 1 or row[f"{old_code}_BlockerType"] == "costly_snooze": + df.loc[index, f"{old_code}_EarningsMessage"] = df.loc[index, f"{old_code}_EarningsMessage"] + \ + " From the past phase you will earn the following: " + + if row[f"{old_code}_BlockerType"] == "costly_snooze": + df.loc[index, f"{old_code}_EarningsMessage"] = df.loc[index, f"{old_code}_EarningsMessage"] + \ + " your remaining snooze budget in the past phase;" + + if row[f"{old_code}_RSIEarnings"] == 0: + df.loc[index, f"{old_code}_EarningsMessage"] = df.loc[index, f"{old_code}_EarningsMessage"] + \ + f"""$0 from the Reduced Screen Time Incentive because + your average daily phone use of {row[f"{old_code}_ActualUse"]} hours + was greater than your benchmark of {row[f"{old_code}_Benchmark"]} hours.""" + + if row[f"{old_code}_RSIEarnings"] > 0: + df.loc[index, f"{old_code}_EarningsMessage"] = df.loc[index, f"{old_code}_EarningsMessage"] + \ + f"""${row[ + f"{old_code}_RSIEarnings"]} from the Reduced Screen Time Incentive because + for every hour you kept your phone below your benchmark, you earned ${row[f"{old_code}_HourlyRate"]}. + You used your phone for {row[f"{old_code}_ActualUse"]} hours per day, + {row[f"{old_code}_Benchmark"] - row[f"{old_code}_ActualUse"]} hours lower + than your benchmark of {row[f"{old_code}_Benchmark"]} hours. """ + + return df \ No newline at end of file diff --git a/17/replication_package/code/data/source/clean_master/management/endline1_prep.py b/17/replication_package/code/data/source/clean_master/management/endline1_prep.py new file mode 100644 index 0000000000000000000000000000000000000000..0ca7394af2ae319a81d1b52d72db830f141537a2 --- /dev/null +++ b/17/replication_package/code/data/source/clean_master/management/endline1_prep.py @@ -0,0 +1,10 @@ + +class Endline1Prep: + + @staticmethod + def main(df): + df["MaxEarnings"] = df["Benchmark"].apply(lambda x: min(x*50,150)) + + for var in ["BlockerTreatment","BlockerType","SnoozeDelay"]: + df["E1_"+var] = df["M_"+var].copy() + return df \ No newline at end of file diff --git a/17/replication_package/code/data/source/clean_master/management/endline2_prep.py b/17/replication_package/code/data/source/clean_master/management/endline2_prep.py new file mode 100644 index 0000000000000000000000000000000000000000..3f30fc03d2a5351bfaabbfe23339b7b736e22d88 --- /dev/null +++ b/17/replication_package/code/data/source/clean_master/management/endline2_prep.py @@ -0,0 +1,10 @@ + +class Endline2Prep: + + @staticmethod + def main(df): + return df + + @staticmethod + def filler(df): + return df \ No newline at end of file diff --git a/17/replication_package/code/data/source/clean_master/management/midline_prep.py b/17/replication_package/code/data/source/clean_master/management/midline_prep.py new file mode 100644 index 0000000000000000000000000000000000000000..e6fbd7ffef73a42e13033ffb71ddc7d8dcc15439 --- /dev/null +++ b/17/replication_package/code/data/source/clean_master/management/midline_prep.py @@ -0,0 +1,324 @@ + +import os +import math +import pytz +from dateutil import parser +import pandas as pd + +import numpy as np +from lib.experiment_specs import study_config +from lib.utilities import serialize + +from lib.experiment_specs import varsets + +from lib.data_helpers.treatment import Treatment +from data.source.exporters.master_contact_generator import MasterContactGenerator + +class MidlinePrep: + + """ + Treatment 
Assignment Algorithm + 1. Subset People to randomize: related to consistent phone use, and good phone model etc + + 2. For 99.8% of the Randomized Group: 25% will get BonusTreatment == "Delay" and + 75% will get BonusTreatment == "None". We will stratify the 99.8% by 3 continuous vars, resulting in 8 + stratification groups. + a. Within bonus treatment X stratification groups (2*8 = 16 groups), assign these 6 limit treatment groups: + 40% assigned to Control, 12% each assigned to Snooze 0, 2, 5, 20, and No Snooze + b. Within Stratification, assign Prediction Rewards (1/3 to each bin) + + 3. For the other 0.2% of the Randomize Group, they receive BonusTreatment == "Immediate". These folks will then get + the following assignments. (the stratification variables will be empty for this group): + a. Half will receive BlockerTreatment == NoBlocker, half will receive Instant Snooze + b. Assign prediction rewards in same manner (1/3 in each bin). + + 5. Assign M_MPLRow based on bonus treatment condition + a. BonusTreatment == "Delayed", then M_MPLRow = 14 + b. BonusTreatment == "None", then M_MPLRow = 0 + c. BonusTreatment == "Immediate", then M_MPLRow is randomly assigned + + 6. Assign E_BlockerMPLRow based on bonus treatment condition + a. BonusTreatment == "Immediate" & BlockerTreatment != NoBlocker, then E_BlockerMPLRow is randint(0,11) & EndlineBlockerChange == 1 + b. BonusTreatment != "Immediate" & BlockerTreatment != NoBlocker, then E_BlockerMPLRow is 11 & EndlineBlockerChange == 0 + c. BlockerTreatment == NoBlocker, then E_BlockerMPLRow is 0 & EndlineBlockerChange == 0 + """ + + special = {"1": 0.002, + "0": 0.998} + + ################## + #distributions for the 99.8% (step 2) + ############### + + #list of continuous stratificaitons + continuous_strat = ["B_FITSBYUseMinutes", "StratAddictionIndex", "StratRestrictionIndex"] + + # bonus treatment distribution for points 3 above + bonus_treatment = {"Delayed": 1/4, + "None": 3/4} + + + # Blocker Assignment within each strat X incentive + blocker = {"NoBlocker": 0.4, + "InstantSnooze": 0.12, + "DelayedSnooze": 0.36, + "NoSnooze": 0.12} + + ############# + # Stratify For the DelaySnooze Participants + delay = {"2": 1 / 3, + "5": 1 / 3, + "20": 1 / 3} + + ################## + # distributions for the 0.2 + ############### + # Blocker Assignment within each strat X incentive within point 2 above + blocker_special = {"NoBlocker": 1/2, + "NoSnooze": 1/2} + + ######### + # other specs + ####### + + # stratify for the 99.8% and don't stratify for the special + predict = { + "1" : 1/2, + "5": 1/2 + } + + # Variable Specifications for Django Config: key is the BlockerTreatment, and values are BlockerType and Snooze Delay Values + blocker_spec_dict = {"NoBlocker": + {"BlockerType": "none", + "SnoozeDelay": -1}, + + "NoSnooze": + {"BlockerType": "no_snooze", + "SnoozeDelay": -1}, + + "InstantSnooze": + {"BlockerType": "free_snooze", + "SnoozeDelay": 0}, + + # the SnoozeDelay values will be stratified by self.Delay + "DelayedSnooze": + {"BlockerType": "free_snooze", + "SnoozeDelay": 1} + } + + @staticmethod + def main(df_master): + len_master = len(df_master) + df_master = MidlinePrep._get_midline_eligible(df_master) + + ###### + #Create Randomized Variabels for all Randomized Poeple + ###### + r = df_master.loc[df_master["Randomize"]=="Yes"] + + # there isn't really a stratum col-- everyone has same Randomize value + r = Treatment(seed = study_config.seed).assign_treat_var(r, + rand_dict = MidlinePrep.special, + stratum_cols = ["Randomize"], + varname = 
"Special") + + df_normal = MidlinePrep._midline_normal(r.loc[r["Special"] == "0"]) + + df_special = MidlinePrep._midline_special(r.loc[r["Special"] == "1"]) + + #### + #fill in treatment variables for non randomized people + #### + nr = df_master.loc[df_master["Randomize"]=="No"] + nr = MidlinePrep._midline_nonrandom_treatment(nr) + + ### + # Add additional cols and Merge back to master + ## + df_treat = pd.concat([df_normal,df_special,nr]) + df_treat = MidlinePrep._add_other_treatment_vars(df_treat) + + new_cols = list(set(df_treat.columns) - set(df_master.columns)) + df_master = df_master.merge(df_treat[["AppCode"]+new_cols], on ="AppCode", how = 'outer') + assert len(df_master) == len_master + + return df_master + + @staticmethod + def _get_midline_eligible(df): + # Get Old AppCodes (generated by look who had use before 3/22 using uad data) + regenerate_old_appcodes = False + if regenerate_old_appcodes == True: + codes_df = MasterContactGenerator.read_in_used_cl("ClaimedPDAppCodes 20200409.csv") + codes_df["CreatedDatetime"] = codes_df["Created"].apply(lambda x: parser.parse(x)) + old_appcodes = list( + codes_df.loc[codes_df["CreatedDatetime"] < pytz.utc.localize(study_config.first_pull), "AppCode"]) + old_appcode_dict = {"Old": old_appcodes} + serialize.save_pickle(old_appcode_dict, + os.path.join("data", "external", "intermediate", "OldAppCodes"), + df_bool=False) + + old_appcodes = serialize.open_pickle(os.path.join("data", "external", "intermediate", "OldAppCodes"), + df_bool=False) + df.loc[df["AppCode"].isin(old_appcodes["Old"]), "OldAppCode"] = True + + # Get Bad Phones + df["PhoneModelUnformat"] = df["PhoneModel"].apply(lambda x: str(x).lower().replace(" ", "").replace("-", "")) + df.loc[(df["PhoneModelUnformat"].str.contains("oneplus")) | + (df["PhoneModelUnformat"].str.contains("xiaomi")), "BadPhone"] = True + df.loc[df["BadPhone"].isnull(), "BadPhone"] = False + + # InitialDrop + df.loc[(df["BadPhone"] == True) | + (df["B_QualityCheck"] == "I will not provide my best answers.") | + (df["B_QualityCheck"] == "I can't promise either way.") | + (df["B_OtherBlockerUse"] == "Yes"), "B_PostSurveyDrop"] = "Yes" + df.loc[(df["B_PostSurveyDrop"]!="Yes")&(df["B_Complete"]=="Complete"),"B_PostSurveyDrop"]="No" + + #Get Midline Eligible + df.loc[(df["B_Complete"] == "Complete") & + (df["B_UseMinutes"].notnull()) & + (df["B_PostSurveyDrop"] != "Yes") & + ((df["PD_Severity"] == "no") | (df["PD_Severity"] == "low")) & + (df["B_TextMissing"] < 3) & + (df["B_PhaseMissingDays"] <= 1) & + (df["B_BlackoutHoursPerDay"] <= 1), "M_Eligible"] = "Yes" + + df.loc[df["B_PostSurveyDrop"]=="Yes","M_Eligible"] = "No - Initial Post B Drop" + df.loc[(df["B_PostSurveyDrop"] != "Yes") & + (df["B_Complete"] == "Complete") & + (df["M_Eligible"].isnull()), "M_Eligible"] = "No - Second Post B Drop" + + #another variable for premidline drop (i.e. 
doesn't meet non use criteria) + df.loc[(df["B_Complete"] == "Complete") & + (df["B_PostSurveyDrop"] != "Yes") & + ((df["PD_Severity"] == "high") | (df["PD_Severity"] == "NoResponse")|(df["PD_Severity"] == "unfinished")) & + (df["B_TextMissing"] >= 3), "B_PostSurveyDropII"] = "Yes" + + df.loc[(df["B_PostSurveyDrop"] != "Yes") & + (df["B_Complete"] == "Complete") & + (df["B_PostSurveyDropII"] != "Yes"), "B_PostSurveyDropII"] = "No" + + # Determine Randomized Sample + df.loc[(df["M_Eligible"] == "Yes"), "Randomize"] = "Yes" + return df + + @staticmethod + def _midline_normal(df): + df = MidlinePrep.quick_strat_index(df) + + ############## + # Assign Treatment + ################# + treatment = Treatment(seed = study_config.seed*234) + df = treatment.prepare_strat(df = df, continuous_strat=MidlinePrep.continuous_strat,discrete_strat=[]) + + # First assign the stat level treatment variables + df = treatment.assign_treat_var(df = df, + rand_dict = MidlinePrep.bonus_treatment, + stratum_cols = ["Stratifier"], + varname = "BonusTreatment") + + df = treatment.assign_treat_var(df, MidlinePrep.predict, ["Stratifier"], "PredictReward") + df = treatment.assign_treat_var(df, MidlinePrep.blocker, ["Stratifier", "BonusTreatment"], + "M_BlockerTreatment") + + # Create the Django Vars + df["M_BlockerType"] = df["M_BlockerTreatment"].apply(lambda x: MidlinePrep.blocker_spec_dict[x]["BlockerType"]) + df["M_SnoozeDelay"] = df["M_BlockerTreatment"].apply(lambda x: MidlinePrep.blocker_spec_dict[x]["SnoozeDelay"]) + + # Modify the SnoozeDelay for the Delayed Snooze Treatment Group + df = treatment.subset_treat_var_wrapper(df = df, + subset_var="M_BlockerTreatment", + subset_val="DelayedSnooze", + rand_dict=MidlinePrep.delay, + stratum_cols = ["Stratifier","BonusTreatment"], + varname = "M_SnoozeDelay") + + return df + + @staticmethod + def quick_strat_index(df): + for index in varsets.stratification_indices: + chars = varsets.index_class_dict[index] + index_parts = [] + + for var in chars.pos_outcomes + chars.neg_outcomes: + b_var = "B_" + var + n_var = b_var+"N" + + if var in chars.neg_outcomes: + df[n_var] = df[b_var].apply(lambda x: -x) + else: + df[n_var] = df[b_var] + + mean = df[n_var].mean() + sd = df[n_var].std() + print(f"{var}: mean: {mean}, sd: {sd}") + df[n_var] = df[n_var].apply(lambda x: (x - mean) / sd) + index_parts.append(n_var) + df[index + "Manual"] = df[index_parts].sum(axis=1) + mean_i = df[index + "Manual"].mean() + sd_i = df[index + "Manual"].std() + index_parts.append(index + "Manual") + #test_vars = index_parts + ["B_"+x for x in chars.pos_outcomes + chars.neg_outcomes] + #test = df[test_vars] + df["Strat" + index] = df[index + "Manual"].apply(lambda x: (x - mean_i) / sd_i) + return df + + @staticmethod + def _midline_special(df): + df["BonusTreatment"] ="Immediate" + treatment = Treatment(seed = study_config.seed*4) + + df = treatment.assign_treat_var(df, MidlinePrep.predict, ["Special"], "PredictReward") + df = treatment.assign_treat_var(df,MidlinePrep.blocker_special,["Special"],"M_BlockerTreatment") + + df["M_BlockerType"] = df["M_BlockerTreatment"].apply(lambda x: MidlinePrep.blocker_spec_dict[x]["BlockerType"]) + df["M_SnoozeDelay"] = df["M_BlockerTreatment"].apply(lambda x: MidlinePrep.blocker_spec_dict[x]["SnoozeDelay"]) + + return df + + @staticmethod + def _midline_nonrandom_treatment(df): + df["PredictReward"] = "0.25" + + df["BonusTreatment"] = "None" + + df["M_BlockerTreatment"] = "InstantSnooze" + + df["M_BlockerType"] = 
MidlinePrep.blocker_spec_dict["InstantSnooze"]["BlockerType"] + + df["M_SnoozeDelay"] = MidlinePrep.blocker_spec_dict["InstantSnooze"]["SnoozeDelay"] + + return df + + @staticmethod + def _add_other_treatment_vars(df): + + #Generate proper 'row that counts' by bonuse treatment + df["MPLRandInt"] = np.random.randint(0, 14, len(df)) + df["M_MPLRow"] = df["MPLRandInt"].astype(str) # i.e. what Immediate Bonus treatment will see + df.loc[df["BonusTreatment"]=="None","M_MPLRow"] = 0 + df.loc[df["BonusTreatment"] == "Delayed","M_MPLRow"] = 14 + + #Generate EndlineLimit MPL - Folks with Blocker Treatment == immediate, and a blocker get a random, and potential for blocker change + df["EndlineBlockerMPLRow"] = np.random.randint(0, 11, len(df)) + df.loc[(df["BonusTreatment"] != "Immediate") & (df["M_BlockerTreatment"]!="NoBlocker"),"EndlineBlockerMPLRow"] = 11 + df.loc[ (df["M_BlockerTreatment"] == "NoBlocker"), "EndlineBlockerMPLRow"] = 0 + + df.loc[(df["BonusTreatment"]=="Immediate") & (df["M_BlockerTreatment"]!="NoBlocker"),"EndlineBlockerChange"] = 1 + df.loc[df["EndlineBlockerChange"].isnull(), "EndlineBlockerChange"] = 0 + + df.loc[df[f"B_{study_config.use_var}"].notnull(), "Benchmark"] = df.loc[ + df[f"B_{study_config.use_var}"].notnull(), f"B_{study_config.use_var}"].apply(lambda x: math.ceil(x / 60)) + + df.loc[df["Benchmark"].isnull(), "Benchmark"] = 1 + df.loc[df["Benchmark"] == 0, "Benchmark"] = 1 + + df.loc[df["Benchmark"] > 1, "HourOrHours"] = "hours" + df.loc[df["Benchmark"] == 1, "HourOrHours"] = "hour" + + df["HourlyRate"] = 50 + + return df \ No newline at end of file diff --git a/17/replication_package/code/data/source/clean_master/outcome_variable_cleaners/outcome_cleaner.py b/17/replication_package/code/data/source/clean_master/outcome_variable_cleaners/outcome_cleaner.py new file mode 100644 index 0000000000000000000000000000000000000000..b12f113a17c7e66980f6c85e22f6672b0d7eb4b8 --- /dev/null +++ b/17/replication_package/code/data/source/clean_master/outcome_variable_cleaners/outcome_cleaner.py @@ -0,0 +1,277 @@ +import numpy as np +import string +from datetime import datetime, timedelta +from lib.data_helpers import data_utils +from lib.experiment_specs import study_config +from lib.utilities.labeler import Labeler +import re + +def clean_outcome_vars(df): + df.columns = df.columns.str.replace(' ', '') + df = _calculate_pd_vars(df) + df = _remove_weird_chars(df) + df = Labeler.label_values(df) + df = _add_vars(df) + return df + +def _calculate_pd_vars(df): + """creates additional variables related to phone dashboard data""" + for phase in study_config.phases.keys(): + if datetime.now().date() > study_config.phases[phase]["StartSurvey"]["Start"].date()+timedelta(2): + phase_start = study_config.phases[phase]["StartSurvey"]["Start"].date() + timedelta(1) + phase_end = study_config.phases[phase]["EndSurvey"]["Start"].date() - timedelta(1) + old_code = study_config.phases[phase]["StartSurvey"]["Code"] + + #flexible recruitment start date + if study_config.phases[phase]["StartSurvey"]["Name"] == "Recruitment": + df.loc[df[f"{old_code}_FirstCreated"].notnull(),f"{old_code}_FirstFullDay"] = \ + df.loc[df[f"{old_code}_FirstCreated"].notnull(), f"{old_code}_FirstCreated"] + else: + df[f"{old_code}_FirstFullDay"] = phase_start + + #either the day before the next survey, or two days before today (to allow yesterday data) + df[f"{old_code}_LastFullDay"] = min(phase_end,datetime.now().date()-timedelta(2)) + + # add a 1 b/c it's an open set + df[f"{old_code}_DaySet"] = 
(df[f"{old_code}_LastFullDay"] - df[f"{old_code}_FirstFullDay"]).apply( + lambda x: x.days + 1) + + df[f"{old_code}_PhaseMissingDays"] = df[f"{old_code}_DaySet"] - df[f"{old_code}_DaysWithUse"] + + if old_code!="R": + df[f"{old_code}_BlackoutHoursPerDay"] = (df[f"{old_code}_BlackoutHours"].fillna(0)/df[f"{old_code}_DaySet"]).round(2) + + return df + +def _remove_weird_chars(df): + for col in ["PhoneUseFeel"]: + for code in ["B","M","E1","E2"]: + col_spec = code + "_" + col + if col_spec in df.columns: + # ensure apostrophes etc are not in the data + df[col_spec] = df[col_spec].replace(np.nan,'nan') + df[col_spec] = df[col_spec].apply(lambda x: ''.join([y for y in x if y in string.printable])) + return df + +def _add_vars(df): + last_survey_complete = data_utils.get_last_survey() + code = study_config.surveys[last_survey_complete]["Code"] + + # Creating limit debug variables from {last_survey_complete} + for app in study_config.fitsby: + app = app.capitalize() + df.loc[df[f"{code}_LimitCheck{app}"].fillna(" ").str.contains("set a limit"), f"ClaimsLimit{app}"] = True + df.loc[(df[f"LimitMinutes{app}"].isnull()) & (df[f"ClaimsLimit{app}"] == True), f"FalseLimit{app}"] = 1 + + df.loc[df[f"{code}_LimitCheck{app}"].fillna(" ").str.contains("trouble setting limit"),f"TroubleLimit{app}"] = 1 + + #need both midline and endline for inconsistent app + for code2 in ["M","E1","E2"]: + df.loc[(df[f"LimitMinutes{app}"] != df[f"{code2}_IdealUsePerApp{app}"].astype(float)) & (df[f"ClaimsLimit{app}"] == True), f"{code2}_InconsistentLimit{app}"] = 1 + + df["FalseLimitCount"] = df[["FalseLimit" + x.capitalize() for x in study_config.fitsby]].sum(axis = 1) + df.loc[df["FalseLimitCount"] > 0,"FalseClaimedLimit"] = "Yes" + + df["TroubleLimitCount"] = df[["TroubleLimit" + x.capitalize() for x in study_config.fitsby]].sum(axis = 1) + df.loc[df["TroubleLimitCount"] > 0, "TroubleLimit"] = "Yes" + + #also make trouble limit equal to yes, if they say their initial limit bug is non empty from survey + df.loc[df[f"{code}_InitialLimitBugs"]!="nan","TroubleLimit"] = "Yes" + + for code2 in ["M","E1","E2"]: + df[f"{code2}_InconsistentLimitCount"] = df[[f"{code2}_InconsistentLimit" + x.capitalize() for x in study_config.fitsby]].sum(axis=1) + df.loc[df[f"{code2}_InconsistentLimitCount"] > 0, f"{code2}_InconsistentLimit"] = "Yes" + + df["FITSBYLimitCount"] = df[[f"LimitMinutes{x.capitalize()}" for x in study_config.fitsby]].count(axis = 1) + + # Create a few outcome variables + for survey in study_config.main_surveys: + code = study_config.surveys[survey]["Code"] + + # add addiction average + if f"{code}_Addiction11" in df.columns: + suffixes = [11,12,13,14,21,22,23,24,31,32,33,34,41,42,43,44] + addiction_vars = [f"{code}_Addiction{x}" for x in suffixes] + print(f"Number of Addiction Vars {len(addiction_vars)}") + df[f"{code}_AddictionAvg"] = df[addiction_vars].mean(axis = 1) + + #Phone Use Reduction Variables + if f"{code}_PhoneUseReduce" in df.columns: + df[f"{code}_IdealUseChange"] = 0.0 + df.loc[df[f"{code}_PhoneUseFeel"]=="I used my smartphone too much.", + f"{code}_IdealUseChange"] = df.loc[df[f"{code}_PhoneUseFeel"]=="I used my smartphone too much.", + f"{code}_PhoneUseReduce"].apply(lambda x: -float(x)) + df.loc[df[f"{code}_PhoneUseFeel"] == "I used my smartphone too little.", + f"{code}_IdealUseChange"] = df.loc[df[f"{code}_PhoneUseFeel"] == "I used my smartphone too little.", + f"{code}_PhoneUseIncrease"].apply(lambda x: float(x)) + + if f"{code}_LifeBetter1" in df.columns: + df[f"{code}_LifeBetter"] = 
df[f"{code}_LifeBetter1"].astype(float) + df = df.drop(columns = [f"{code}_LifeBetter1"]) + + elif f"{code}_LifeBetter" in df.columns: + df[f"{code}_LifeBetter"] = df[f"{code}_LifeBetter"].astype(float) + + # Text Variables + addiction_cols = [x for x in df.columns if ("AddictionText" in x) & (f"{code}_" in x)] + if len(addiction_cols)>0: + print(addiction_cols) + total_poss = len(addiction_cols) + df[f"{code}_TextCompleteCount"] = df[addiction_cols].count(axis=1) + df[f"{code}_TextMissing"] = total_poss - df[f"{code}_TextCompleteCount"] + + # PD Bug Severity + df["PD_Severity"] = df["PD_Severity"].fillna("NoResponse") + + return df + +###### OLD ############### +def _nan_use_for_survey_incompletes(df): + for survey in ["Baseline","Midline"]: + code = study_config.surveys[survey]["Code"] + if datetime.now() > study_config.surveys[survey]["End"]: + df.loc[df[f"{code}_Complete"] != "Complete",f"{code}_ActualUse"] = float('nan') + #df.loc[df[f"{code}_Complete"] != "Complete", f"{code}_OldActualUse"] = float('nan') + return df + +def _compress_time_vars(df): + "(old name suffix, row in survey question matrix, embedded_data_suffix, new name)" + time_col_specs = { + "PredictUse_1":{ + "EmbeddedSuffix": "PredictUse", + "NewSuffix": "PredictUse"}, + + "PredictReducedUse_1": { + "EmbeddedSuffix": "PredictReduced", + "NewSuffix": "PredictReducedUse"}, + + "CondReducedUse_1":{ + "EmbeddedSuffix": "CondReduced", + "NewSuffix": "CondReducedUse"}, + + "PerceivedUse_1": { + "EmbeddedSuffix": None, + "NewSuffix": "PerceivedUse"}, + + "PerceivedSub_1": { + "EmbeddedSuffix": None, + "NewSuffix": "Studying"}, + + "PerceivedSub_2": { + "EmbeddedSuffix": None, + "NewSuffix": "ScreenAlone"}, + + "PerceivedSub_3": { + "EmbeddedSuffix": None, + "NewSuffix": "NonScreenAlone"}, + + "PerceivedSub_4": { + "EmbeddedSuffix": None, + "NewSuffix": "Socializing"}, + + "PDActualUse_1": { + "EmbeddedSuffix": None, + "NewSuffix": "PDActualUse"}, + + "Q577_1": { + "EmbeddedSuffix": None, + "NewSuffix": "3WeekPredictControl"}, + + "Q582_1": { + "EmbeddedSuffix": None, + "NewSuffix": "3WeekPredictRSI"}, + + } + rename_keys = [x+"_1" for x in list(time_col_specs.keys())] + rename_values = [time_col_specs[x]["NewSuffix"] for x in list(time_col_specs.keys())] + rename_dic = dict(zip(rename_keys,rename_values)) + + for survey in study_config.main_surveys: + + if study_config.surveys[survey]["End"] < datetime.now()-timedelta(2): + survey_code = study_config.surveys[survey]["Code"] + for col, specs in time_col_specs.items(): + old_hr_col = survey_code + "_" + col + "_1" + old_min_col = survey_code + "_" + col + "_2" + + if old_hr_col in df.columns: + + #drop any associated embedded data cols + if specs["EmbeddedSuffix"] != None: + emb_hour = survey_code+"_"+specs["EmbeddedSuffix"]+"Hour" + emb_min = survey_code+"_"+specs["EmbeddedSuffix"]+"Minute" + emb_general = survey_code+"_"+specs["EmbeddedSuffix"] + for emb_data in [emb_hour,emb_min,emb_general]: + if emb_data in df.columns: + df = df.drop(columns=[emb_data]) + + #deal with nan's + for col in [old_hr_col, old_min_col]: + df.loc[df[col] == "nan", col] = "0" + df.loc[df[col].isna(), col] = "0" + df[col] = df[col].astype(float) + + new_col_name = survey_code+"_"+specs["NewSuffix"] + df[new_col_name] = round(df[old_hr_col] + df[old_min_col] / 60,2) + + # drop the raw survey vars + df = df.drop(columns = [old_hr_col,old_min_col]) + + return df + +def _compress_random_order_vars(df): + random_L2H = [x for x in df.columns if "L2H" in x] + for l2h_col in random_L2H: + new_col = 
l2h_col.replace("L2H", "") + h2l_col = l2h_col.replace("L2H", "H2L") + df[new_col] = df[l2h_col].copy() + #df[new_col] = df[new_col].replace("nan","") + df.loc[df[h2l_col].notnull(), new_col] = df.loc[df[h2l_col].notnull(),h2l_col] + test = df[[l2h_col,h2l_col,new_col]] + + old_vars = [x.replace("B_","") for x in random_L2H] + new_vars = [x.replace("L2H","").replace("B_","") for x in random_L2H] + #codebook.update_char_in_codebook(dict(zip(old_vars,new_vars)),"VariableName") + return df + +# THIS HAS BEEN DEPRECRATED, NOW THAT WE ARE USING FIXED PHASES +def _calculate_PERSONAL_pd_vars(df): + for phase in study_config.phases.keys(): + if datetime.now() > study_config.phases[phase]["StartSurvey"]["Start"] + timedelta(2): + df = data_utils.inpute_missing_survey_datetimes(df, phase) + old_code = study_config.phases[phase]["StartSurvey"]["Code"] + new_code = study_config.phases[phase]["EndSurvey"]["Code"] + start_col = f"{old_code}_SurveyEndDatetime" + end_col = f"{new_code}_SurveyStartDatetime" + + df.loc[df[start_col].notnull(), + f"{old_code}_FirstFullDay"] = df.loc[df[start_col].notnull(), start_col].apply( + lambda x: datetime.date(x) + timedelta(1)) + + df.loc[df[end_col].notnull(), + f"{old_code}_LastFullDay"] = df.loc[df[end_col].notnull(), end_col].apply( + lambda x: datetime.date(x) - timedelta(1)) + + df[f"{old_code}_DaySet"] = (df[f"{old_code}_LastFullDay"] - df[f"{old_code}_FirstFullDay"]).apply( + lambda x: x.days + 1) + + df[f"{old_code}_PhaseMissingDays"] = df[f"{old_code}_DaySet"] - df[f"{old_code}_DaysWithUse"] + + df[f"{old_code}_BlackoutHoursPerDay"] = ( + df[f"{old_code}_BlackoutHours"].fillna(0) / df[f"{old_code}_DaySet"]).round(2) + + """if phase == "Phase3": + if datetime.now().date() > study_config.pe_split_day + timedelta(1): + df[f"{old_code}_SectionDivideDate"] = study_config.pe_split_day + + #excludes the divide date + df[f"{old_code}_FirstDaySet"] = df[f"{old_code}_SectionDivideDate"] - \ + df[f"{old_code}_FirstFullDay"] + + + # includes the divide date + df[f"{old_code}_SecondDaySet"] = (df[f"{old_code}_LastFullDay"] - + df[f"{old_code}_SectionDivideDate"]).apply( + lambda x: x.days + 1)""" + + return df diff --git a/17/replication_package/code/data/source/exporters/exporter.py b/17/replication_package/code/data/source/exporters/exporter.py new file mode 100644 index 0000000000000000000000000000000000000000..0a3696b22b786ef099ade8c43ace275329ae726d --- /dev/null +++ b/17/replication_package/code/data/source/exporters/exporter.py @@ -0,0 +1,45 @@ +import git +import sys +import os +import pandas as pd +from datetime import datetime + +root = git.Repo('.', search_parent_directories=True).working_tree_dir +sys.path.append(root) +os.chdir(os.path.join(root)) +print(root) + + +from data.source.exporters.master_contact_generator import MasterContactGenerator +from data.source.exporters.stata import Stata +from lib.utilities import codebook +from data.source.exporters.tango import Tango +from lib.utilities import serialize + + + +class Exporter: + + @staticmethod + def export_all(clean_master_user): + print(f"\nCreate Master Contacts File, Generate Contact Lists {datetime.now()}") + master_contact_generator = MasterContactGenerator(clean_master_user) + master_contact_generator.generate_contact_lists() + + Tango(clean_master_user) + + print(f"\nExporting Master {datetime.now()}") + codebook_dict = pd.read_csv(codebook.main_codebook_path, index_col="VariableName").to_dict(orient='index') + Stata().general_exporter(clean_master_df = clean_master_user, + cb_dict = 
codebook_dict, + level_name= "User", + is_wide= True) + + #print(f"\n Exporting Short Answers") + #short_answer.export_short_answers(clean_master_df) + + +if __name__ == "__main__": + + mc = serialize.open_pickle(os.path.join("data","external","intermediate","MasterCleanUser")) + Exporter.export_all(mc) \ No newline at end of file diff --git a/17/replication_package/code/data/source/exporters/stata.py b/17/replication_package/code/data/source/exporters/stata.py new file mode 100644 index 0000000000000000000000000000000000000000..aafb82d1d84194b75eb9a88c7c48d83a1933cf69 --- /dev/null +++ b/17/replication_package/code/data/source/exporters/stata.py @@ -0,0 +1,35 @@ +import os +import pandas as pd +import string +from lib.utilities.labeler import Labeler +from lib.utilities import codebook + +class Stata(): + + def general_exporter(self,clean_master_df: pd.DataFrame, cb_dict: dict, level_name: str, is_wide: bool): + exportable_df = Labeler(sheet=level_name, is_wide=is_wide, codebook_dict=cb_dict).add_labels_to_df(clean_master_df) + + if level_name == "User": + codebook.create_expanded_codebook(exportable_df) + exportable_df = self.remove_bad_chars(exportable_df) + exportable_df = self.stata_reformat(exportable_df) + + exportable_df.to_csv(os.path.join("data","external","intermediate", f"PrepAnalysis{level_name}.csv"), + index=False) + print("exported dataframe for stata") + + """removes strange chars and converts everything to strings""" + def remove_bad_chars(self, df): + df = df.astype(str).applymap(lambda x: x.strip().replace("\n", "").replace('"', '')) + df = df.applymap(lambda x: ''.join([y for y in x if y in string.printable])) + return df + + """export separate csv file for stata analysis + - ensures that each cell takes the first 81 chars + """ + def stata_reformat(self,exportable_master): + stata_export_df = exportable_master.applymap(lambda x: x[0:81]) + return stata_export_df + + + diff --git a/17/replication_package/code/data/source/exporters/tango.py b/17/replication_package/code/data/source/exporters/tango.py new file mode 100644 index 0000000000000000000000000000000000000000..a2d7e194319f6e2404ed485fe1aba8072e1330b9 --- /dev/null +++ b/17/replication_package/code/data/source/exporters/tango.py @@ -0,0 +1,65 @@ +import os +import sys +import pandas as pd +import git + +#root directory of github repo +root = git.Repo('.', search_parent_directories = True).working_tree_dir +os.chdir(root) +sys.path.append(root) + +from lib.experiment_specs import study_config +from datetime import datetime, timedelta +from data.source.clean_master.management.earnings import Earnings +from lib.data_helpers.confidential import Confidential + +class Tango(): + tango_folder = os.path.join("data","external","dropbox_confidential", "Tango") + + def __init__(self, master): + master_pii = Confidential.add_pii(master) + if datetime.now()>study_config.surveys["Baseline"]["End"]: + self.populate_first_tango(master_pii,"Baseline") + + if datetime.now()>study_config.surveys["Endline2"]["End"]: + self.populate_endline_tango(master_pii) + pass + + """creates the tango card for baseline payment""" + def populate_first_tango(self, master,survey): + code = study_config.surveys[survey]["Code"] + completes = master.loc[(master[f"{code}_Complete"] == "Complete"), :].reset_index(drop=True) + completes[f"{survey}_Reward"] = completes["InitialPaymentPart1"] + completes[f"{survey}_Message"] = "" + baseline_tango = self.populate_tango_template(completes, f"{survey}_Reward", f"{survey}_Message") + 
baseline_tango.to_csv(os.path.join(self.tango_folder, f"{survey}_Tango.csv"), index=False) + + def populate_endline_tango(self, master): + completes = master.loc[master["AdditionalPay"]>0, :].reset_index(drop=True) + completes["Message"] = completes["OverallMessage"] + + endline_tango = self.populate_tango_template(completes, "AdditionalPay", "Message") + endline_tango.to_csv(os.path.join(self.tango_folder, "Endline_Tango.csv"), index=False) + print("Created Endline Tango Card") + return endline_tango + + def populate_tango_template(self,completes,reward_col,reward_message): + template = pd.read_csv(os.path.join(self.tango_folder, "Tango_Template.csv")) + template_columns = [x for x in template.columns.values if "Unnamed" not in x] + tango = pd.DataFrame(columns=template_columns) + tango["Recipient Email (required)"] = completes["MainEmail"] + tango['Recipient First Name (required)'] = completes["FirstName"] + tango["Reward Amount (required)"] = completes[reward_col] + tango["Reward Message"] = completes[reward_message] + tango["ETID (required)"] = "E095108" + tango["UTID (required)"] = "U151754" + tango["E2_Complete"] = completes["E2_Complete"] + tango["RevealConfirm"] = completes["M_RevealConfirm"] + return tango + + +if __name__ == "__main__": + tango_survey = "Baseline" + if tango_survey == "Baseline": + b = pd.read_csv(os.path.join("data","external","intermediate","Surveys","Baseline.csv")) + Tango(b) diff --git a/17/replication_package/code/data/source/prep_stata.do b/17/replication_package/code/data/source/prep_stata.do new file mode 100644 index 0000000000000000000000000000000000000000..45b2c529f0794d54f3af5a3481c110ab50ff454d --- /dev/null +++ b/17/replication_package/code/data/source/prep_stata.do @@ -0,0 +1,47 @@ + +/* Prep Master Data Files.do */ + +clear all +cd ".." + + +foreach level in "User" "UserAppDay" { + import delimited "data/external/intermediate/PrepAnalysis`level'.csv", varname(1) clear stringcols(_all) case(preserve) + + *recode NA's and destring numerical values + foreach var of varlist _all{ + display "`var'" + label variable `var' "`=`var'[1]'" + replace `var'="" if _n==1 + replace `var' = strrtrim(`var') + replace `var'="" if `var' =="nan" + + destring `var', replace + } + + * A few important manual recodes: + tostring AppCode, replace + + + * Encode Categorical Variables + foreach var of varlist _all { + capture confirm variable `var'_str + if !_rc { + capture assert `var' == int(`var') + if _rc { + replace `var'=`var'*100 + } + display "label `var'" + labmask `var', values(`var'_str) + order `var', after(`var'_str) + drop `var'_str + } + } + + * DROP OUTCOME VARIABLE ENCODING FOR NOW + keep if AppCode != "" + + + saveold "data/external/final/Analysis`level'.dta", replace + +} diff --git a/17/replication_package/code/data/source/run.py b/17/replication_package/code/data/source/run.py new file mode 100644 index 0000000000000000000000000000000000000000..f84f4d773b3243476ec15dee02066ba3666ebbbb --- /dev/null +++ b/17/replication_package/code/data/source/run.py @@ -0,0 +1,55 @@ +from datetime import datetime +import os +import sys +import git + +#importing modules from root of data +root = '../..' 
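+# descriptive note: '../..' resolves relative to the working directory, so it points at the code root
+# only when run.py is launched from data/source; exporter.py and tango.py instead derive the root via
+# git.Repo('.', search_parent_directories=True).working_tree_dir, which should resolve to the same directory.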
+sys.path.append(root) +os.chdir(os.path.join(root)) + +from pympler.tracker import SummaryTracker +from data.source.build_master.builder import Builder +from data.source.clean_master.cleaner import Cleaner +from data.source.exporters.exporter import Exporter +from lib.utilities import serialize, codebook + + +def run(): + + tracker = SummaryTracker() + print(f"build master on user_level and user_app_day level {datetime.now()}") + config_user_dic = serialize.open_yaml("config_user.yaml") + + if config_user_dic["local"]["log"] == True: + log_file = open(os.path.join("data", "log", "mb_logger.log"), "w") + sys.stdout = log_file + + if config_user_dic["local"]["skip_building"] == False: + codebook.initialize_main_codebook() + raw_master_df = Builder.build_master() + + else: + if config_user_dic["local"]["test"]: + print("cant skip build in test run") + sys.exit() + else: + codebook.update_master_specs() + raw_master_df = serialize.open_pickle(os.path.join("data","external","intermediate","MasterIntermediateUser")) + + print(f"\nClean Master_User {datetime.now()}") + tracker.print_diff() + clean_master_df = Cleaner().clean_master(raw_master_df) + + print(f"\n Create External Files {datetime.now()}") + if config_user_dic["local"]["test"]: + print("skipping") + else: + Exporter.export_all(clean_master_df) + + print("Complete!") + if config_user_dic["local"]["log"] == True: + log_file.close() + +if __name__ == "__main__": + run() diff --git a/17/replication_package/code/data/temptation/code/.Rhistory b/17/replication_package/code/data/temptation/code/.Rhistory new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/17/replication_package/code/data/temptation/code/aggregate_dashboard.R b/17/replication_package/code/data/temptation/code/aggregate_dashboard.R new file mode 100644 index 0000000000000000000000000000000000000000..70958314796302ddb60881e09018a780f6de260b --- /dev/null +++ b/17/replication_package/code/data/temptation/code/aggregate_dashboard.R @@ -0,0 +1,606 @@ +# Aggregate Phone Dashboard outcomes + +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Environment +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +# Import libraries +library(tidyverse) +library(magrittr) +library(janitor) +library(rio) +library(lubridate) + +# Global variables +FITSBY <- + c('facebook', + 'instagram', + 'twitter', + 'snapchat', + 'browser', + 'youtube') + +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Main +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +main <- function() { + # Dashboard data + daily <- + import_daily() %>% + categorize_apps %>% + filter(date <= ymd('2020-10-04')) # CHANGE + + hourly <- + import_hourly() %>% + filter(date <= ymd('2020-10-04')) # CHANGE + + # Survey data + users <- + import_users() %>% + get_phase_days() + + # Intermediate data + usage <- get_usage_by_app_phase(daily) + tight <- get_tightness_by_app_phase(daily) + snooze <- get_snooze_by_app_phase(daily) + + # Final data + usage_by_category <- reshape_usage_by_category(usage, users) + usage_by_hour <- reshape_usage_by_hour(hourly, daily, users) + tight_by_app <- reshape_tightness_by_app(tight, users) + measures <- reshape_measures(daily, usage, tight, snooze, users, filter = F) + measures_fitsby <- reshape_measures(daily, usage, tight, snooze, users, filter = T) + + # Merged data + df <- + usage_by_category %>% + 
left_join(usage_by_hour, by = c('app_code')) %>% + left_join(tight_by_app, by = c('app_code')) %>% + left_join(measures, by = c('app_code')) %>% + left_join(measures_fitsby, by = c('app_code')) %>% + rename(AppCode = app_code) + + export(df, 'temp/dashboard.dta') +} + +reshape_measures <- function(daily, usage, tight, snooze, users, filter){ + # Filter apps + if (filter == T) { + daily %<>% + filter(app %in% FITSBY) + + usage %<>% + filter(app %in% FITSBY) + + tight %<>% + filter(app %in% FITSBY) + + snooze %<>% + filter(app %in% FITSBY) + } + + total_usage <- reshape_average(usage, alt_use_minutes, "Usage", users) + total_tight <- reshape_total(tight, limit_tightness, "LimitTight") + total_snooze_min <- reshape_average(snooze, total_snooze_minutes, "SnoozeMin", users) + total_snooze_count <- reshape_average(snooze, snooze_enabled, "SnoozeCount", users) + usage_by_week <- reshape_usage_by_week(daily, start_date = '2020-04-12') + usage_by_day <- reshape_usage_by_day(daily, start_date = '2020-04-12') + + # Merge data + df <- + usage_by_week %>% + left_join(usage_by_day, by = c('app_code')) %>% + left_join(total_usage, by = c('app_code')) %>% + left_join(total_tight, by = c('app_code')) %>% + left_join(total_snooze_min, by = c('app_code')) %>% + left_join(total_snooze_count, by = c('app_code')) + + # Rename variables + if (filter == T) { + df %<>% + rename_at(vars(-app_code), list(~paste0(., '_fitsby'))) + } + + return(df) +} + +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Data functions +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +import_daily <- function() { + daily <- + import('external/phone_addiction/Data/Temptation/Intermediate/MasterUserAppDay.csv') %>% + clean_names %>% + mutate_at(vars(date), ymd) + + return(daily) +} + +import_users <- function() { + users <- + import('external/phone_addiction/Data/Temptation/Final/AnalysisUser.dta') %>% + clean_names + + return(users) +} + +import_hourly <- function() { + hourly <- + import('temp/hourly.csv') %>% + clean_names %>% + mutate_at(vars(date), ymd) + + return(hourly) +} + +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Cleaning functions +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +categorize_apps <- function(daily) { + # Get categories for top apps + top <- + import('raw/top_apps_cleaned.csv') %>% + select(-comments) + + # Merge in app categories + daily %<>% + left_join(top, by = c('app')) + + return(daily) +} + +get_phase_days <- function(users) { + # Get phase day variable + users %<>% + select(app_code, contains('days_with_use'), -contains('week')) %>% + rename_all(list(~gsub('days_with_use', 'phase_days', .))) + + # Rename phases + users %<>% + gather(key = 'key', value = 'value', -app_code) %>% + mutate_at(vars(key), list(~gsub('^r_', 'P0:', .))) %>% + mutate_at(vars(key), list(~gsub('^b_', 'P1:', .))) %>% + mutate_at(vars(key), list(~gsub('^m_', 'P2:', .))) %>% + mutate_at(vars(key), list(~gsub('^e1_', 'P3:', .))) %>% + mutate_at(vars(key), list(~gsub('^e2_', 'P4:', .))) %>% + mutate_at(vars(key), list(~gsub('^p5_', 'P5:', .))) %>% + mutate_at(vars(key), list(~gsub('^p6_', 'P6:', .))) %>% + mutate_at(vars(key), list(~gsub('^p7_', 'P7:', .))) %>% + mutate_at(vars(key), list(~gsub('^p8_', 'P8:', .))) %>% + mutate_at(vars(key), list(~gsub('^p9_', 'P9:', .))) + + + + # Reshape data + users %<>% + separate(key, c('phase', 'variable'), sep = ':', extra = 'merge') %>% + spread(key = 
'variable', value = 'value', fill = 0) + + # Calculate total number of post-midline phase days + post_midline <- + users %>% + filter(phase %in% c('P2', 'P3', 'P4')) %>% + group_by(app_code) %>% + summarize_at(vars(phase_days), sum) %>% + ungroup %>% + mutate(phase = 'P432') + + users %<>% + bind_rows(post_midline) + + # Calculate the number of endline phase days + endline <- + users %>% + filter(phase %in% c('P3', 'P4')) %>% + group_by(app_code) %>% + summarize_at(vars(phase_days), sum) %>% + ungroup %>% + mutate(phase = 'P43') + + users %<>% + bind_rows(endline) + + # Calculate total period 2 to 5 + post_midline <- + users %>% + filter(phase %in% c('P2', 'P3', 'P4', 'P5')) %>% + group_by(app_code) %>% + summarize_at(vars(phase_days), sum) %>% + ungroup %>% + mutate(phase = 'P5432') + + users %<>% + bind_rows(post_midline) + + return(users) +} + +assign_phases <- function(daily) { + # Rename phases + phase <- + daily %>% + mutate_at(vars(phase), list(~ifelse(. == 'Phase0', 'P0', .))) %>% + mutate_at(vars(phase), list(~ifelse(. == 'Phase1', 'P1', .))) %>% + mutate_at(vars(phase), list(~ifelse(. == 'Phase2', 'P2', .))) %>% + mutate_at(vars(phase), list(~ifelse(. == 'Phase3', 'P3', .))) %>% + mutate_at(vars(phase), list(~ifelse(. == 'Phase4', 'P4', .))) %>% + mutate_at(vars(phase), list(~ifelse(. == 'Phase5', 'P5', .))) %>% + mutate_at(vars(phase), list(~ifelse(. == 'Phase6', 'P6', .))) %>% + mutate_at(vars(phase), list(~ifelse(. == 'Phase7', 'P7', .))) %>% + mutate_at(vars(phase), list(~ifelse(. == 'Phase8', 'P8', .))) %>% + mutate_at(vars(phase), list(~ifelse(. == 'Phase9', 'P9', .))) + + # Append post-midline phase + post_midline <- + phase %>% + filter(phase %in% c('P2', 'P3', 'P4')) %>% + mutate(phase = 'P432') + + phase %<>% + bind_rows(post_midline) + + # Append endline phase + endline <- + phase %>% + filter(phase %in% c('P3', 'P4')) %>% + mutate(phase = 'P43') + + phase %<>% + bind_rows(endline) + + # Append p2-p5 + endline <- + phase %>% + filter(phase %in% c('P2', 'P3', 'P4', 'P5')) %>% + mutate(phase = 'P5432') + + phase %<>% + bind_rows(endline) + + return(phase) +} + + +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Audit functions +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +check_usage <- function(daily, users) { + # Get number of non-zero usage days + check <- + daily %>% + assign_phases %>% + group_by(app_code, date, phase) %>% + summarize_at(vars(alt_use_minutes), sum, na.rm = T) %>% + ungroup + + check %<>% + filter(alt_use_minutes > 0) %>% + group_by(app_code, phase) %>% + summarize(count = n()) %>% + ungroup + + # Merge with Michael's non-zero usage days + check %<>% + filter(phase != '') %>% + left_join(users, by = c('app_code', 'phase')) %>% + mutate(diff = count - phase_days) + + table(check$diff) +} + +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Aggregation functions +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +get_usage_by_app_phase <- function(daily) { + # Assign phases + daily %<>% + assign_phases + + # Aggregate usage + usage <- + daily %>% + group_by(app_code, app, category, phase) %>% + summarize_at(vars(alt_use_minutes), sum, na.rm = T) %>% + ungroup + + return(usage) +} + +get_limit_by_app_phase <- function(daily) { + # Assign phases + daily %<>% + assign_phases + + # Aggregate limits + limits <- + daily %>% + filter(is.na(limit_minutes) == F) + + limits %<>% + group_by(app_code, app, category, phase) 
%>% + summarize_at(vars(limit_minutes), mean, na.rm = T) %>% + ungroup + + return(limits) +} + +get_tightness_by_app_phase <- function(daily) { + # Assign phases + daily %<>% + assign_phases + + # Get limits + limits <- get_limit_by_app_phase(daily) + + # Calculate limit tightness + tight <- + daily %>% + filter(phase == 'P1') %>% + select(app_code, app, category, alt_use_minutes) %>% + left_join(limits, by = c('app_code', 'app')) %>% + filter(is.na(phase) == F) + + tight %<>% + mutate(limit_tightness = pmax(alt_use_minutes - limit_minutes, 0)) %>% + mutate_at(vars(limit_tightness), list(~ifelse(is.na(.), 0, .))) + + # Aggregate limit tightness + tight %<>% + group_by(app_code, app, phase) %>% + summarize_at(vars(limit_tightness), mean, na.rm = T) %>% + ungroup + + return(tight) +} + +get_snooze_by_app_phase <- function(daily, winsorize = 60) { + # Assign phases + daily %<>% + assign_phases + + # Winsorize snooze + snooze <- + daily %>% + filter(is.na(snooze_enabled) == F | is.na(total_snooze_minutes) == F) + + snooze %<>% + mutate(total_snooze_minutes = ifelse(total_snooze_minutes > winsorize, winsorize, total_snooze_minutes)) + + # Aggregate snooze + snooze %<>% + group_by(app_code, app, category, phase) %>% + summarize_at(vars(snooze_enabled, total_snooze_minutes), sum, na.rm = T) + + return(snooze) +} + +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Reshape functions +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +reshape_average <- function(data, measure, name, users) { + # Enquo inputs + enquo_measure <- enquo(measure) + + # Get total + data %<>% + group_by(app_code, phase) %>% + summarize_at(vars(!!enquo_measure), sum, na.rm = T) %>% + ungroup + + # Get daily average + data %<>% + left_join(users, by = c('app_code', 'phase')) %>% + mutate(!!enquo_measure := !!enquo_measure / phase_days) %>% + select(-phase_days) + + # Rename phase + data %<>% + filter(phase != '') %>% + mutate_at(vars(phase), list(~paste0('PD_', ., '_', name))) + + # Reshape data + data %<>% + spread(key = phase, value = !!enquo_measure) + + return(data) +} + +reshape_total <- function(data, measure, name) { + # Enquo inputs + enquo_measure <- enquo(measure) + + # Get total + data %<>% + group_by(app_code, phase) %>% + summarize_at(vars(!!enquo_measure), sum, na.rm = T) %>% + ungroup + + # Rename phase + data %<>% + filter(phase != '') %>% + mutate_at(vars(phase), list(~paste0('PD_', ., '_', name))) + + # Reshape data + data %<>% + spread(key = phase, value = !!enquo_measure) + + return(data) +} + +reshape_usage_by_category <- function(usage, users) { + # Get usage by category + usage_by_category <- + usage %>% + group_by(app_code, category, phase) %>% + summarize_at(vars(alt_use_minutes), sum, na.rm = T) %>% + ungroup + + # Get daily average + usage_by_category %<>% + left_join(users, by = c('app_code', 'phase')) %>% + mutate(alt_use_minutes = alt_use_minutes / phase_days) %>% + select(-phase_days) + + # Append other + other <- + usage_by_category %>% + filter(!(category %in% FITSBY)) %>% + mutate(category = "other") + + other %<>% + group_by(app_code, category, phase) %>% + summarize_at(vars(alt_use_minutes), sum, na.rm = T) %>% + ungroup + + usage_by_category %<>% + bind_rows(other) + + # Rename measure + usage_by_category %<>% + filter(is.na(category) == F & category != '') %>% + filter(phase != '') %>% + mutate_at(vars(phase), list(~paste0('PD_', ., '_Usage_'))) %>% + mutate(key = paste0(phase, category)) + + # Reshape data + 
usage_by_category %<>% + select(app_code, key, alt_use_minutes) %>% + spread(key = key, value = alt_use_minutes) + + return(usage_by_category) +} + +reshape_tightness_by_app <- function(tight, users) { + # Group non-FITSBY apps + tight_by_app <- + tight %>% + mutate_at(vars(app), list(~ifelse(. %in% FITSBY, ., 'other'))) + + # Aggregate non-FITSBY apps + tight_by_app %<>% + group_by(app_code, app, phase) %>% + summarize_at(vars(limit_tightness), sum, na.rm = T) %>% + ungroup + + # Rename measure + tight_by_app %<>% + filter(phase != '') %>% + mutate_at(vars(phase), list(~paste0('PD_', ., '_LimitTight_'))) + + # Reshape data + tight_by_app %<>% + mutate(app = paste0(phase, app)) %>% + select(-phase) %>% + spread(key = app, value = limit_tightness) + + return(tight_by_app) +} + +reshape_usage_by_week <- function(daily, start_date, max_weeks = 21) { + # Get week + usage_by_week <- + daily %>% + mutate(week = floor(as.numeric(date - ymd(start_date)) / 7) + 1) + + # Aggregate data + usage_by_week %<>% + group_by(app_code, week) %>% + summarize_at(vars(alt_use_minutes), sum, na.rm = T) %>% + ungroup %>% + mutate(alt_use_minutes = alt_use_minutes / 7) + + + # Rename measure + usage_by_week %<>% + filter(week %in% 1:max_weeks) %>% + mutate(week = paste0('PD_WeeklyUsage_', week)) + + # Reshape data + usage_by_week %<>% + spread(key = week, value = alt_use_minutes) + + return(usage_by_week) +} + +reshape_usage_by_day <- function(daily, start_date, max_days = 63) { + # Get week + usage_by_day <- + daily %>% + mutate(day = as.numeric(date - ymd(start_date) + 1)) + + # Aggregate data + usage_by_day %<>% + group_by(app_code, day) %>% + summarize_at(vars(alt_use_minutes), sum, na.rm = T) %>% + ungroup + + # Rename measure + usage_by_day %<>% + filter(day %in% 1:max_days) %>% + mutate(day = paste0('PD_DailyUsage_', day)) + + # Reshape data + usage_by_day %<>% + spread(key = day, value = alt_use_minutes) + + return(usage_by_day) +} + +reshape_usage_by_hour <- function(hourly, daily, users) { + # Get phase + phase <- + daily %>% + assign_phases %>% + select(app_code, date, phase) %>% + distinct + + # Merge in phase + usage_by_hour <- + hourly %>% + left_join(phase, by = c('app_code', 'date')) + + # Group hours + usage_by_hour %<>% + mutate(hour = ifelse(hour %% 2 == 0, hour + 1, hour)) + + # Aggregate usage + usage_by_hour %<>% + group_by(app_code, phase, hour) %>% + summarize_at(vars(contains("use_minutes")), sum, na.rm = T) %>% + ungroup + + # Get daily hourly average + usage_by_hour %<>% + left_join(users, by = c('app_code', 'phase')) %>% + mutate_at(vars(contains("use_minutes")), list(~. 
/ (phase_days * 2))) %>% + select(-phase_days) + + # Gather data + usage_by_hour %<>% + gather(key = "key", value = "value", -app_code, -phase, -hour) %>% + mutate_at(vars(key), list(~gsub("use_minutes", "", .))) + + # Rename measure + usage_by_hour %<>% + filter(phase != '') %>% + mutate_at(vars(phase), list(~paste0('PD_', ., '_Usage_Hour', hour, key))) %>% + select(-hour, -key) + + # Reshape data + usage_by_hour %<>% + spread(key = phase, value = value) + + return(usage_by_hour) +} + +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Execute +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +main() \ No newline at end of file diff --git a/17/replication_package/code/data/temptation/code/clean_data.do b/17/replication_package/code/data/temptation/code/clean_data.do new file mode 100644 index 0000000000000000000000000000000000000000..caf2a0750418f8496f9cd77851c9a75feaa60b44 --- /dev/null +++ b/17/replication_package/code/data/temptation/code/clean_data.do @@ -0,0 +1,588 @@ +// Create analysis dataset from survey and Phone Dashboard data + +*************** +* Environment * +*************** + +clear all +adopath + "input/lib/ado" +adopath + "input/lib/stata/ado" + +********************* +* Utility functions * +********************* + +program define_constants + yaml read YAML using "input/config.yaml" + + yaml global BONUS = YAML.metadata.payment.bonus + yaml global FIXED_RATE = YAML.metadata.payment.fixed_rate + + global HIST_SETTINGS /// + ytitle("Fraction of sample" " ") /// + legend(region(lcolor(white))) /// + graphregion(color(white)) +end + +********************** +* Analysis functions * +********************** + +program main + define_constants + import_manual_coding + import_data + standardize_data + + check_snooze + winsorize_data + clean_data + check_predictions + save_data +end + +program import_manual_coding + * Import data + import excel using "external/phone_addiction/Confidential/Temptation/QualitativeFeedback/Baseline_QualitativeFeedback.xlsx", /// + clear firstrow sheet("COVID_All") + + * Rename variables + keep sum AppCode + rename sum B_CovidChangeReason + + * Clean data + drop if AppCode == "AppCode" + replace AppCode = "A" + AppCode + + * Export data + save "temp/manual.dta", replace +end + +program import_data + clear + use "external/phone_addiction/Data/Temptation/Final/AnalysisUser.dta", replace + merge 1:1 AppCode using "temp/dashboard.dta", nogen + merge 1:1 AppCode using "temp/installed.dta", nogen + merge 1:1 AppCode using "temp/manual.dta", nogen + merge 1:1 AppCode using "temp/pd_usage.dta", nogen +end + +program include_label_code + * Remove existing labels + foreach var of varlist _all { + label var `var' "" + } + + * Label variables + include "temp/label_code.do" + + * Drop unlabeled variables + ds, not(varlabel) + cap drop `r(varlist)' +end + +program standardize_data + include "temp/rename_code.do" + include "temp/tostring_code.do" + include "temp/sdecode_code.do" + include "temp/replace_code.do" + include "temp/destring_code.do" + include_label_code + include "temp/label_var_code.do" +end + +program get_analytic_sample + keep if S2_RevealConfirm == 1 & S3_Bonus <= 1 & PD_P5_Usage != . 
& S4_Finished == 1 +end + +program check_snooze + * Preserve data + preserve + + * Get sample + get_analytic_sample + + * Plot data + twoway hist PD_P2_SnoozeCount, frac /// + bcolor(maroon) /// + xtitle(" " "Snoozes per day") /// + legend(order(1 "Period 2" 2 "Periods 3 & 4")) /// + $HIST_SETTINGS + + twoway hist PD_P2_SnoozeMin, frac /// + bcolor(maroon) /// + xtitle(" " "Snooze minutes per day") /// + legend(order(1 "Period 2" 2 "Periods 3 & 4")) /// + $HIST_SETTINGS + + twoway hist PD_P2_SnoozeMinFITSBY, frac /// + bcolor(maroon) /// + xtitle(" " "Snooze minutes per day") /// + legend(order(1 "Period 2" 2 "Periods 3 & 4")) /// + $HIST_SETTINGS + + * Restore data + restore +end + +program check_predictions + * Preserve data + preserve + + * Get sample + get_analytic_sample + keep if S3_Bonus == 0 & S2_LimitType == 0 + + * Reshape data + gen S2_FITSBYUsage = PD_P2_UsageFITSBY + gen S3_FITSBYUsage = PD_P3_UsageFITSBY + + local w 250 + + foreach time in S2 S3 { + gen UsageDiff_`time' = `time'_FITSBYUsage - `time'_PredictUseNext_1 + + replace UsageDiff_`time' = `w' /// + if UsageDiff_`time' >= `w' + + replace UsageDiff_`time' = -`w' /// + if UsageDiff_`time' <= -`w' + } + + * Plot data + twoway (hist UsageDiff_S2, width(30) color(gray%50)) /// + (hist UsageDiff_S3, width(30) color(maroon%33)), /// + xtitle(" " "Actual usage minus predicted usage" "(minutes/day)") /// + legend(order(1 "Period 2" 2 "Period 3")) /// + $HIST_SETTINGS + + graph export "output/hist_predicted_actual.pdf", replace + + * Restore data + restore +end + +program winsorize_data + * Snooze + local w 120 + + foreach time in P2 P3 P4 P5 P43 P432 P5432 { + foreach fitsby in "" FITSBY { + gen PD_`time'_SnoozeMin_W`fitsby' = PD_`time'_SnoozeMin`fitsby' + + replace PD_`time'_SnoozeMin_W`fitsby' = `w' /// + if PD_`time'_SnoozeMin_W`fitsby' >= `w' & PD_`time'_SnoozeMin_W`fitsby' != . + } + } + + * Predicted usage + gen S2_ActualUse = PD_P2_UsageFITSBY + gen S3_ActualUse = PD_P3_UsageFITSBY + gen S4_ActualUse = PD_P4_UsageFITSBY + gen S5_ActualUse = PD_P5_UsageFITSBY + + local w 60 + + foreach time in S2 { + gen UsageDiff_`time' = `time'_ActualUse - `time'_PredictUseInitial + + replace UsageDiff_`time' = `w' /// + if UsageDiff_`time' > `w' & UsageDiff_`time' > `w' != . + + replace UsageDiff_`time' = -`w' /// + if UsageDiff_`time' < -`w' & UsageDiff_`time' < -`w' != . + + gen `time'_PredictUseInitial_W = `time'_ActualUse - UsageDiff_`time' + + drop UsageDiff_`time' + } + + foreach s of numlist 2/4 { + foreach prediction of numlist 1/3 { + + local survey S`s' + local predicted_pd = `s' + `prediction' - 1 + local time S`predicted_pd' + + if (`predicted_pd' < 6){ + cap drop UsageDiff + gen UsageDiff = `time'_ActualUse - `survey'_PredictUseNext_`prediction' + + replace UsageDiff = `w' /// + if UsageDiff > `w' & UsageDiff > `w' != . + + replace UsageDiff = -`w' /// + if UsageDiff < -`w' & UsageDiff < -`w' != . + + gen `survey'_PredictUseNext_`prediction'_W = `time'_ActualUse - UsageDiff + + } + } + } + + local w 100 + foreach s of numlist 2/4 { + foreach prediction of numlist 1/3 { + + local survey S`s' + local predicted_pd = `s' + `prediction' - 1 + local time S`predicted_pd' + + if (`predicted_pd' < 6){ + cap drop UsageDiff + gen UsageDiff = `time'_ActualUse - `survey'_PredictUseNext_`prediction' + + replace UsageDiff = `w' /// + if UsageDiff > `w' & UsageDiff > `w' != . + + replace UsageDiff = -`w' /// + if UsageDiff < -`w' & UsageDiff < -`w' != . 
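+					* the prediction error has been capped symmetrically at ±`w' (here 100) minutes/day;
+					* the gen below recovers the winsorized prediction (suffix _W100) as actual use minus the capped error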
+ + gen `survey'_PredictUseNext_`prediction'_W100 = `time'_ActualUse - UsageDiff + + } + } + } + + drop S2_ActualUse S3_ActualUse S4_ActualUse S5_ActualUse UsageDiff +end + +program clean_data + * Age group + gen S0_Age_temp = S0_Age if S0_Age >= 18 & S0_Age <= 64 + recode S0_Age_temp /// + (18/22 = 1 "Ages 18-22") /// + (23/30 = 2 "Ages 23-30") /// + (31/40 = 3 "Ages 31-40") /// + (41/50 = 4 "Ages 41-50") /// + (51/64 = 5 "Ages 51-64"), /// + gen(AgeGroup) + drop S0_Age_temp + + * Limit type + gen S2_LimitType_recode = S2_LimitType + replace S2_LimitType_recode = 1 if S2_LimitType == 1 & S2_SnoozeDelay == 0 + replace S2_LimitType_recode = 2 if S2_LimitType == 1 & S2_SnoozeDelay == 2 + replace S2_LimitType_recode = 3 if S2_LimitType == 1 & S2_SnoozeDelay == 5 + replace S2_LimitType_recode = 4 if S2_LimitType == 1 & S2_SnoozeDelay == 20 + replace S2_LimitType_recode = 5 if S2_LimitType == 2 + + rename S2_LimitType S2_LimitGroup + recode S2_LimitType_recode /// + (0 = 0 "No limit") /// + (1 = 1 "Snooze 0") /// + (2 = 2 "Snooze 2") /// + (3 = 3 "Snooze 5") /// + (4 = 4 "Snooze 20") /// + (5 = 5 "No snooze"), /// + gen(S2_LimitType) + drop S2_LimitType_recode + + * Code limit tightness as 0 if do not set a limit on this but could + foreach var of varlist PD_P*LimitTight PD_P*LimitTightFITSBY { + replace `var' = 0 if `var' == . & S2_LimitType > 0 + } + + + * Interest in limits + foreach time in S1 { + recode `time'_InterestInLimits /// + (1 = 4 "Very") /// + (2 = 3 "Moderately") /// + (3 = 2 "Slightly") /// + (4 = 1 "Not at all"), /// + gen(`time'_InterestInLimits_recode) + + drop `time'_InterestInLimits + rename `time'_InterestInLimits_recode `time'_InterestInLimits + } + + * Addiction scale + foreach var of varlist *_Addiction_* { + replace `var' = (`var' - 1) / 4 + } + + foreach time in S1 S3 S4 { + egen `time'_AddictionIndex = rowtotal(`time'_Addiction_*) + replace `time'_AddictionIndex = -1 * `time'_AddictionIndex + } + + * Re scale wellbeing to be from -1 to +1 + foreach time in S1 S3 S4 { + foreach num of numlist 1/7{ + replace `time'_WellBeing_`num' = `time'_WellBeing_`num' / 3 + label define `time'_WellBeing_`num' -1 "Strongly disagree" 0 "Neither agree nor disagree" 1 "Strongly agree", replace + } + } + + * Subjective well-being index + foreach time in S1 S3 S4 { + replace `time'_WellBeing_3 = -1 * `time'_WellBeing_3 + replace `time'_WellBeing_4 = -1 * `time'_WellBeing_4 + replace `time'_WellBeing_6 = -1 * `time'_WellBeing_6 + egen `time'_SWBIndex = rowtotal(`time'_WellBeing_*), missing + } + + * Re scale SMS addiction to be from -1 to +1 + foreach time in S1 S2 S3 { + foreach num of numlist 1/9{ + replace `time'_AddictionText_`num' = (`time'_AddictionText_`num' - 5.5) / 4.5 + label define `time'_AddictionText_`num' -1 "Not at all" 1 "Definitely", replace + } + } + + * SMS index + foreach time in S1 S2 S3 { + replace `time'_AddictionText_3 = -1 * `time'_AddictionText_3 + egen `time'_SMSIndex = rowtotal(`time'_AddictionText_*), missing + replace `time'_SMSIndex = -1 * `time'_SMSIndex + } + + * re-index so that + gen S4_SMSIndex = S3_SMSIndex + replace S3_SMSIndex = S2_SMSIndex + drop S2_SMSIndex + + foreach num of numlist 1/3 { + foreach subweek of numlist 1/3 { + local week = (`num' - 1) * 3 + `subweek' + local q1 = (`subweek' - 1) * 3 + 1 + local q2 = (`subweek' - 1) * 3 + 2 + local q3 = (`subweek' - 1) * 3 + 3 + egen Week`week'_SMSIndex = rowtotal(S`num'_AddictionText_`q1' S`num'_AddictionText_`q2' S`num'_AddictionText_`q3'), missing + } + } + + * Phone use feelings + foreach time 
in S1 S3 S4 { + gen `time'_PhoneUseChange = 0 + replace `time'_PhoneUseChange = -`time'_PhoneUseReduce if `time'_PhoneUseReduce != . + replace `time'_PhoneUseChange = `time'_PhoneUseIncrease if `time'_PhoneUseIncrease != . + } + gen S1_PhoneUseChange2019 = 0 + replace S1_PhoneUseChange2019 = - S1_PhoneUseReduce2019 if S1_PhoneUseReduce2019 != . + replace S1_PhoneUseChange2019 = S1_PhoneUseIncrease2019 if S1_PhoneUseIncrease2019 != . + + gen S4_Substitution = 0 + replace S4_Substitution = - S4_SubLess if S4_SubLess != . + replace S4_Substitution = S4_SubMore if S4_SubMore != . + + local w 150 + gen S4_Substitution_W = S4_Substitution + replace S4_Substitution_W = `w' if S4_Substitution_W > `w' + replace S4_Substitution_W = -`w' if S4_Substitution_W < -`w' + + + * Predicted earnings + foreach time in S2 { + foreach measure in PredictUseInitial { + gen `time'_`measure'Earn = `time'_Benchmark - (`time'_`measure'_W / 60) + replace `time'_`measure'Earn = `time'_`measure'Earn * $BONUS + replace `time'_`measure'Earn = max(0, `time'_`measure'Earn) + replace `time'_`measure'Earn = min(150, `time'_`measure'Earn) + } + } + + foreach time in S2 { + foreach measure in PredictUseBonus { + gen `time'_`measure'Earn = `time'_PredictUseInitial_W * (1 - `time'_`measure' / 100) + replace `time'_`measure'Earn = `time'_Benchmark - (`time'_`measure'Earn / 60) + replace `time'_`measure'Earn = `time'_`measure'Earn * $BONUS + replace `time'_`measure'Earn = max(0, `time'_`measure'Earn) + replace `time'_`measure'Earn = min(150, `time'_`measure'Earn) + } + } + + * Bonus willingness-to-pay + foreach time in S2 { + foreach num of numlist 1/14 { + replace `time'_MPL_`num' = 15 if `time'_MPL_`num' == 1 + replace `time'_MPL_`num' = `num' if `time'_MPL_`num' == 2 + } + + egen `time'_MPL = rowmin(`time'_MPL_*) + + * ORIGINAL VALUES + * 1 = 3 | 2 = 3 | 3 = 2.5 | 4 = 2 | 5 = 1.8 + * 6 = 1.6 | 7 = 1.4 | 8 = 1.2 | 9 = 1 | 10 = 0.8 + * 11 = 0.6 | 12 = 0.4 | 13 = 0.2 | 14 = 0 | + + * We note that the value of `time'_MPL is the lowest `time'_MPL_x such that + * the respondent prefers the RSI to the payment. Thus we average between + * the value of x and the value from x - 1. + recode `time'_MPL /// + (1 = 3.5 ) /// + (2 = 3.5 ) /// + (3 = 2.75) /// + (4 = 2.25) /// + (5 = 1.9 ) /// + (6 = 1.7 ) /// + (7 = 1.5 ) /// + (8 = 1.3 ) /// + (9 = 1.1 ) /// + (10 = 0.9 ) /// + (11 = 0.7 ) /// + (12 = 0.5 ) /// + (13 = 0.3 ) /// + (14 = 0.1 ) /// + (15 = 0 ) + replace `time'_MPL = `time'_MPL * $FIXED_RATE + } + + * Limit willingness-to-pay + foreach time in S3 { + foreach num of numlist 1/10 { + replace `time'_MPLLimit_`num' = 11 if `time'_MPLLimit_`num' == 1 + replace `time'_MPLLimit_`num' = `num' if `time'_MPLLimit_`num' == 2 + } + + * ORIGINAL VALUES + * 1 = 20 | 2 = 15 | 3 = 10 | 4 = 5 | 5 = 4 + * 6 = 3 | 7 = 2 | 8 = 1 | 9 = 0 | 10 = -1 + + egen `time'_MPLLimit = rowmin(`time'_MPLLimit_*) + recode `time'_MPLLimit /// + (1 = 25 ) /// + (2 = 17.5 ) /// + (3 = 12.5 ) /// + (4 = 7.5 ) /// + (5 = 4.5 ) /// + (6 = 3.5 ) /// + (7 = 2.5 ) /// + (8 = 1.5 ) /// + (9 = 0.5 ) /// + (10 = -0.5 ) /// + (11 = -5 ) + } + + * Willingness-to-pay for motivation + foreach time in S2 { + gen `time'_Motivation = (`time'_PredictUseInitialEarn + `time'_PredictUseBonusEarn) / 2 + replace `time'_Motivation = `time'_MPL - `time'_Motivation + } + + * Snooze + foreach time in P2 P3 P4 P5 P43 P432 P5432{ + foreach fitsby in "" "FITSBY" { + replace PD_`time'_SnoozeCount`fitsby' = 0 if PD_`time'_SnoozeCount`fitsby' == . 
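+			* same recode for snooze minutes: missing values are treated as zero snooze time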
+ replace PD_`time'_SnoozeMin`fitsby' = 0 if PD_`time'_SnoozeMin`fitsby' == . + } + } + + * Installed + foreach var of varlist PD_P1_Installed_* { + replace `var' = 0 if `var' == . + } + + recode S4_PreferredSnooze /// + (1 = 0 ) /// + (2 = 1 ) /// + (3 = 2 ) /// + (4 = 3.5 ) /// + (5 = 5 ) /// + (6 = 10 ) /// + (7 = 20 ) /// + (8 = 45 ) /// + (9 = . ) /// + (10 = . ), generate(S4_PreferredSnooze_recode) + + recode S1_Income /// + (1 = 5) /// + (2 = 15) /// + (3 = 25) /// + (4 = 35) /// + (5 = 45) /// + (6 = 55) /// + (7 = 67) /// + (8 = 87.5) /// + (9 = 112.5) /// + (10 = 137.5) /// + (11 = 150) /// + (12 = .), /// + gen(balance_income) + + gen balance_college = (S1_Education >= 5) + gen balance_male = (S0_Gender == 1) + gen balance_white = (S1_Race == 5) + gen balance_age = S0_Age + gen balance_usage = PD_P1_UsageFITSBY +end + +program label_data + label var S1_PhoneUseChange "Ideal use change" + label var S3_PhoneUseChange "Ideal use change" + label var S4_PhoneUseChange "Ideal use change" + label var S1_AddictionIndex "Addiction scale x (-1)" + label var S3_AddictionIndex "Addiction scale x (-1)" + label var S4_AddictionIndex "Addiction scale x (-1)" + label var S1_LifeBetter "Phone makes life better" + label var S3_LifeBetter "Phone makes life better" + label var S4_LifeBetter "Phone makes life better" + label var S1_SMSIndex "SMS addiction scale x (-1)" + label var S3_SMSIndex "SMS addiction scale x (-1)" + label var S4_SMSIndex "SMS addiction scale x (-1)" + label var S1_SWBIndex "Subjective well-being" + label var S3_SWBIndex "Subjective well-being" + label var S4_SWBIndex "Subjective well-being" +end + +program normalize_data + foreach var in LifeBetter PhoneUseChange AddictionIndex SMSIndex SWBIndex { + sum S1_`var' if S2_Bonus == 0 & S2_LimitType == 0 + + foreach time in S1 S3 S4 { + gen `time'_`var'_N = (`time'_`var' - r(mean)) / r(sd) + local label: variable label `time'_`var' + label var `time'_`var'_N "`label' + } + } +end + +program label_index + label var S1_index_well "Welfare survey index" + label var S3_index_well "Welfare survey index" + label var S4_index_well "Welfare survey index" +end + +program normalize_index + foreach var in index_well { + sum S1_`var' if S2_Bonus == 0 & S2_LimitType == 0 + + foreach time in S1 S3 S4 { + gen `time'_`var'_N = (`time'_`var' - r(mean)) / r(sd) + local label: variable label `time'_`var' + label var `time'_`var'_N "`label' + } + } +end + +program save_data + save "output/final_data.dta", replace + + * Get analytic sample + get_analytic_sample + + * Label outcome variables + label_data + + * Normalize outcome variables + normalize_data + + * Calculate outcome indices + do "code/make_indices.do" + + * Label outcome indices + label_index + + * Normalize indices + normalize_index + + save "output/final_data_sample.dta", replace +end + +*********** +* Execute * +*********** + +main + diff --git a/17/replication_package/code/data/temptation/code/collapse_hourly.py b/17/replication_package/code/data/temptation/code/collapse_hourly.py new file mode 100644 index 0000000000000000000000000000000000000000..1d6ee954980eeba95d63c0c4ccb19791b77c2eaf --- /dev/null +++ b/17/replication_package/code/data/temptation/code/collapse_hourly.py @@ -0,0 +1,39 @@ +import pickle +import pandas as pd + +FITSBY = ['facebook', 'instagram', 'twitter', 'snapchat', 'browser', 'youtube'] + +def main(): + """ Main function to execute. + + Notes + ----- + Collapses Phone Dashboard usage data to an hourly level. 
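+
+    The output temp/hourly.csv has one row per AppCode x Date x Hour,
+    with UseMinutes summed over all apps and UseMinutesFITSBY summed
+    over the FITSBY apps only (Facebook, Instagram, Twitter, Snapchat,
+    browser, and YouTube).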
+ """ + + # Load data + with open('external/phone_addiction/Data/Temptation/Intermediate/PhoneDashboard/Alternative.pickle', 'rb') as f: + data = pickle.load(f) + + data = pd.DataFrame.from_dict(data) + data["HourOfDay"] = data["CreatedDatetimeHour"].apply(lambda x: x.hour) + + data = data[['AppCode', 'CreatedDate', 'HourOfDay', 'App', 'UseMinutes']] + + # Get FITSBY usage + data["UseMinutesFITSBY"] = data["UseMinutes"] + data.loc[~data['App'].isin(FITSBY), "UseMinutesFITSBY"] = 0 + + # Collapse data + data = (data.groupby(['AppCode', 'CreatedDate', 'HourOfDay']) + .sum() + .reset_index() + ) + + # Rename variables + data = data.rename(columns = {'CreatedDate': 'Date', 'HourOfDay': 'Hour'}) + + # Save data + data.to_csv('temp/hourly.csv', index = False) + +main() \ No newline at end of file diff --git a/17/replication_package/code/data/temptation/code/get_PDUsage.py b/17/replication_package/code/data/temptation/code/get_PDUsage.py new file mode 100644 index 0000000000000000000000000000000000000000..6476e19e17423761761da27780de5d77ccce457b --- /dev/null +++ b/17/replication_package/code/data/temptation/code/get_PDUsage.py @@ -0,0 +1,37 @@ +import pandas as pd +import numpy as np +import pickle + +def main(): + """ Main function to execute. + + Notes + ----- + Collects usage data on the app, Phone Dashboard + """ + + # Load data + with open('external/phone_addiction/Data/Temptation/Intermediate/PhoneDashboard/Alternative.pickle', 'rb') as f: + data = pickle.load(f) + + a = pd.DataFrame.from_dict(data) + + # Filter to just the phone dashboard usage, in relevant time periods + a_rel = a[a.App == "com.audacious_software.phone_dashboard"] + a_rel = a_rel[(a_rel.Phase == "Phase2")|(a_rel.Phase == "Phase3")|(a_rel.Phase == "Phase4")|(a_rel.Phase == "Phase5")] + + + # Aggregate + results = a_rel.groupby(["AppCode"]).agg({'UseMinutes': 'sum', + 'CreatedDatetimeHour':'count'}).reset_index() + + + # Rename for standardize merging with rest of data + results = results.rename(columns={'AppCode': 'AppCode', + 'UseMinutes': 'PD_P5432_UsageMinutesPD', + 'CreatedDatetimeHour': 'PD_P5432_UsageCountPD'}) + + # Save data + results.to_stata("temp/pd_usage.dta") + +main() diff --git a/17/replication_package/code/data/temptation/code/get_installed_apps.R b/17/replication_package/code/data/temptation/code/get_installed_apps.R new file mode 100644 index 0000000000000000000000000000000000000000..8ebe56057e32cf67dafa4addeacabd7bfecc3f24 --- /dev/null +++ b/17/replication_package/code/data/temptation/code/get_installed_apps.R @@ -0,0 +1,76 @@ +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Environment +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +# Import libraries +library(tidyverse) +library(lubridate) +library(magrittr) +library(janitor) +library(rio) + +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Main +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +main <- function() { + installed <- + import_installed() %>% + get_installed_apps(end_date = '2020-08-02') %>% + rename(AppCode = app_code) + + export(installed, 'temp/installed.dta') +} + +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Data functions +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +import_installed <- function() { + installed <- + import('external/phone_addiction/Data/Temptation/Intermediate/PhoneDashboard/AltInstall.csv') %>% + 
clean_names %>% + mutate_at(vars(date), ymd) + + return(installed) +} + +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Cleaning functions +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +get_installed_apps <- function(installed, end_date) { + # Keep baseline only + installed %<>% + filter(date <= ymd(end_date)) + + # make fitsby app name + installed %<>% mutate(app=ifelse(fitsby=="", app, fitsby)) + + # Merge in app categories + top <- + import('raw/top_apps_cleaned.csv') %>% + select(-comments) + + installed %<>% + left_join(top, by = c('app')) + + # Reshape data + installed %<>% + select(app_code, category) %<>% + filter(is.na(category) == F & category != '') %>% + mutate(category = paste0('PD_P1_Installed_', category)) + + installed %<>% + mutate(value = 1) %>% + distinct %>% + spread(key = category, value = value, fill = 0) + + return(installed) +} + +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Execute +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +main() \ No newline at end of file diff --git a/17/replication_package/code/data/temptation/code/get_top_apps.r b/17/replication_package/code/data/temptation/code/get_top_apps.r new file mode 100644 index 0000000000000000000000000000000000000000..4d3a1f4aab31ca9ca1580ded2c066faf68cba058 --- /dev/null +++ b/17/replication_package/code/data/temptation/code/get_top_apps.r @@ -0,0 +1,75 @@ +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Environment +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +# Import libraries +library(tidyverse) +library(magrittr) +library(janitor) +library(scales) +library(rio) + +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Main +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +main <- function() { + daily <- + import_daily() %>% + get_top_apps +} + +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Data functions +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +import_daily <- function() { + daily <- + import('external/phone_addiction/Data/Temptation/Intermediate/MasterUserAppDay.csv') %>% + clean_names + + return(daily) +} + +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Cleaning functions +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +get_top_apps <- function(daily, rank = 750) { + # Get total usage by app + total <- + daily %<>% + group_by(app) %>% + summarize_at(vars(alt_use_minutes), sum, na.rm = T) %>% + ungroup + + # Filter to top used apps + top <- + total %>% + arrange(desc(alt_use_minutes)) %>% + mutate(rank = 1:n()) %>% + mutate(perc_use = alt_use_minutes / sum(alt_use_minutes)) %>% + mutate(cum_perc_use = cumsum(perc_use)) %>% + filter(rank <= !!rank) + + # Print usage coverage + (top$cum_perc_use * 100) %>% + max %>% + sprintf("Total app usage coverage: %1.2f%%", .) 
%>% + print + + # Export top used apps + top %>% + select(app) %>% + write_csv('temp/top_apps.csv') +} + +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +# Execute +# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +main() + +# Next steps: +# 1) Manually categorize apps in `temp/top_apps.csv` +# 2) Save categorized apps as `raw/top_apps_cleaned.csv` \ No newline at end of file diff --git a/17/replication_package/code/data/temptation/code/make_indices.do b/17/replication_package/code/data/temptation/code/make_indices.do new file mode 100644 index 0000000000000000000000000000000000000000..4af721d738b66fc345b8a143a87f785c914ce24b --- /dev/null +++ b/17/replication_package/code/data/temptation/code/make_indices.do @@ -0,0 +1,89 @@ +set matsize 11000 + +cap drop S1_WellBeingAlt_3 S1_WellBeingAlt_4 S1_WellBeingAlt_6 +gen S1_WellBeingAlt_3 = -1 * S1_WellBeing_3 +gen S1_WellBeingAlt_4 = -1 * S1_WellBeing_4 +gen S1_WellBeingAlt_6 = -1 * S1_WellBeing_6 + +local varset_well S1_PhoneUseChange_N S1_AddictionIndex_N S1_SMSIndex_N S1_SWBIndex_N S1_LifeBetter_N +local varset_HSAD S1_WellBeing_1 S1_WellBeing_2 S1_WellBeingAlt_3 S1_WellBeingAlt_4 +local varset_CDS S1_WellBeing_5 S1_WellBeingAlt_6 S1_WellBeing_7 + +foreach vset in well HSAD CDS { // Loop through all outcome families + + * Get number of variables + local k: word count `varset_`vset'' + + * Initiate covariance matrix + matrix cov = J(`k', `k', .) + + * Get pairwise covariances + local i = 1 + + foreach var1 in `varset_`vset'' { + local j = 1 + + foreach var2 in `varset_`vset'' { + corr `var1' `var2', covariance + matrix cov[`i', `j'] = el(r(C), 1, 2) + local j = `j' + 1 + } + + local i = `i' + 1 + } + + * Move matrix to Mata + mata: cov = st_matrix("cov") + mata: cov + + * Get inverse covariance matrix + mata: invcov = invsym(cov) + mata: invcov + + * Get outcome weights + mata: weights = rowsum(invcov) + mata: weights + + * Move outcome weights to Stata + mata : st_matrix("weight", weights') + matrix list weight + svmat double weight, names(weight) + + * Fill in outcome weights + forvalues i = 1/`k' { + replace weight`i' = weight`i'[1] if weight`i' == . + } + + local varset_root "" + * Get variable roots for index construction + foreach v in `varset_`vset'' { + local root = "`v'" + local root = subinstr("`root'", "S1_", "", .) + local varset_root `varset_root' `root' + } + + * Calculate outcome index + foreach t in "S1" "S3" "S4" { + gen denom = 0 + gen num = 0 + + local i = 1 + + foreach v in `varset_root' { // Missing outcomes are excluded + cap gen `t'_`v' = . // Handling if outcome missing for everyone + replace denom = denom + weight`i' if `t'_`v' != . + replace num = num + weight`i'*`t'_`v' if `t'_`v' != . + local i = `i' + 1 + } + + gen `t'_index_`vset' = num / denom + drop num denom + } + + * Clear + drop weight* + mata: mata clear + clear matrix +} +cap drop S1_WellBeingAlt_3 S1_WellBeingAlt_4 S1_WellBeingAlt_6 + diff --git a/17/replication_package/code/data/temptation/code/translate_codebook.py b/17/replication_package/code/data/temptation/code/translate_codebook.py new file mode 100644 index 0000000000000000000000000000000000000000..a26bcbdacb06c3aa42e6c98431a019df79c2c620 --- /dev/null +++ b/17/replication_package/code/data/temptation/code/translate_codebook.py @@ -0,0 +1,306 @@ +import pandas as pd +import re + +def main(PATH = 'raw/codebook.xlsx'): + """ Main function to execute. + + Notes + ----- + Automatically generate Stata cleaning code from codebook. 
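+
+    Each create_* helper below writes one do-file to temp/ (rename,
+    tostring, sdecode, replace, destring, variable-label, and
+    value-label code). As an illustration, a codebook row with
+    Original name `Q42` and Variable name `S1_Age` (names made up for
+    the example) is translated by create_rename_code into the Stata
+    line `rename Q42 S1_Age`.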
+ """ + + codebook = pd.DataFrame() + codebook = codebook.append(pd.read_excel(PATH, sheet_name = 'Qualtrics')) + codebook = codebook.append(pd.read_excel(PATH, sheet_name = 'Phone Dashboard')) + + create_rename_code(codebook, do_path = 'temp/rename_code.do') + create_tostring_code(codebook, do_path = 'temp/tostring_code.do') + create_sdecode_code(codebook, do_path = 'temp/sdecode_code.do') + create_replace_code(codebook, do_path = 'temp/replace_code.do') + create_destring_code(codebook, do_path = 'temp/destring_code.do') + create_label_code(codebook, do_path = 'temp/label_code.do') + create_label_var_code(codebook, do_path = 'temp/label_var_code.do') + +def create_rename_code(codebook, do_path): + """ Create rename code. + + Parameters + ---------- + codebook : pd.DataFrame + Codebook to generate Stata code from. + do_path : str + Do-file path to write Stata code. + + Notes + ----- + - Generates Stata code renaming variables from column `Original name` to + column `Variable name`. + """ + + # Process codebook + codebook = codebook.copy() + codebook = (codebook.dropna(subset = ['Variable name']) + .set_index(['Variable name']) + .to_dict() + ) + codebook = codebook['Original name'] + + # Generate code + rename = 'rename %s %s' + rename_code = [rename % (v, k) for k, v in codebook.items()] + rename_code = '\n'.join(rename_code) + '\n' + + # Save code + with open(do_path, 'w') as f: + f.write(rename_code) + +def create_tostring_code(codebook, do_path): + """ Create tostring code. + + Parameters + ---------- + codebook : pd.DataFrame + Codebook to generate Stata code from. + do_path : str + Do-file path to write Stata code. + + Notes + ----- + - Generates Stata code converting variables in column `Variable name` + to string. + - The `capture` prefix is used as some variables are already in string + format in the raw data or are numeric with value labels. + """ + + # Process codebook + codebook = codebook.copy() + codebook = codebook.dropna(subset = ['Variable name', 'Values']) + + # Process values + codebook['Values'] = codebook['Values'].str.split(" = ") + codebook['Values'] = codebook['Values'].apply(lambda x: ['"%s"' % x[0], '"%s"' % x[1]]) + + # Generate code + tostring = 'capture tostring %s, replace' + + tostring_code = list(codebook['Variable name']) + tostring_code = [tostring % var for var in tostring_code] + tostring_code = '\n'.join(tostring_code) + '\n' + + # Save code + with open(do_path, 'w') as f: + f.write(tostring_code) + +def create_sdecode_code(codebook, do_path): + """ Create sdecode code. + + Parameters + ---------- + codebook : pd.DataFrame + Codebook to generate Stata code from. + do_path : str + Do-file path to write Stata code. + + Notes + ----- + - Generates Stata code decoding variables in column `Variable name` + to string. + - Numeric variables with value labels cannot be converted to string in + Stata; they must be decoded. + - `sdecode` is an enhanced version of `decode`. + See [here](http://fmwww.bc.edu/RePEc/bocode/s/sdecode.html). + - The `capture` prefix is used as some variables are already in string + format in the raw data. 
+ """ + + # Process codebook + codebook = codebook.copy() + codebook = codebook.dropna(subset = ['Variable name', 'Values']) + + # Process values + codebook['Values'] = codebook['Values'].str.split(" = ") + codebook['Values'] = codebook['Values'].apply(lambda x: ['"%s"' % x[0], '"%s"' % x[1]]) + + # Generate code + sdecode = 'capture sdecode %s, replace' + + sdecode_code = list(codebook['Variable name']) + sdecode_code = [sdecode % var for var in sdecode_code] + sdecode_code = '\n'.join(sdecode_code) + '\n' + + # Save code + with open(do_path, 'w') as f: + f.write(sdecode_code) + +def create_replace_code(codebook, do_path): + """ Create replace code. + + Parameters + ---------- + codebook : pd.DataFrame + Codebook to generate Stata code from. + do_path : str + Do-file path to write Stata code. + + Notes + ----- + - Generates code recoding variables in column `Variable name` according to + the mapping in column `Values`. + - The right-hand side of `Values` is the value to be replaced. + The left-hand side of `Values` is the value to use for replacement. + For example, `1 = Yes` will be parsed as replace `Yes` with `1`. + - All text in brackets are ignored. For example, `1 = 1 [Yes]` will be + parsed as replace `1` with `1`. + - `All other options` includes all non-missing, non-binary values. For + example, `0 = All other options` will be parsed as replace all values that + are not `0`, `1`, or empty as `0`. + """ + + # Process codebook + codebook = codebook.copy() + codebook[['Variable name']] = codebook[['Variable name']].fillna(method = 'ffill') + codebook = codebook.dropna(subset = ['Values']) + + # Process values + codebook['Values'] = codebook['Values'].str.split(" = ") + codebook['Values'] = codebook['Values'].apply(lambda x: ['"%s"' % x[0], '"%s"' % x[1]]) + + # Generate code + replace_code = [] + + for index, row in codebook.iterrows(): + var = row['Variable name'] + values = row['Values'] + + # Remove bracketed comments + if re.search(' \[.*\]', values[1]): + values[1] = re.sub(' \[.*\]', '', values[1]) + + # Determine replace code + if re.match('"All other options"', values[1]): + replace = 'replace %s = %s if !inlist(%s, "0", "1", "")' + replace_code.append(replace % (var, values[0], var)) + else: + replace = 'replace %s = %s if %s == %s' + replace_code.append(replace % (var, values[0], var, values[1])) + + replace_code = '\n'.join(replace_code) + '\n' + + # Save code + with open(do_path, 'w') as f: + f.write(replace_code) + +def create_destring_code(codebook, do_path): + """ Create tostring code. + + Parameters + ---------- + codebook : pd.DataFrame + Codebook to generate Stata code from. + do_path : str + Do-file path to write Stata code. + + Notes + ----- + - Generates Stata code converting variables in column `Variable name` + to numeric. + """ + + # Process codebook + codebook = codebook.copy() + codebook = codebook.dropna(subset = ['Variable name', 'Values']) + + # Process values + codebook['Values'] = codebook['Values'].str.split(" = ") + codebook['Values'] = codebook['Values'].apply(lambda x: ['"%s"' % x[0], '"%s"' % x[1]]) + + # Generate code + destring = 'destring %s, replace' + + destring_code = list(codebook['Variable name']) + destring_code = [destring % var for var in destring_code] + destring_code = '\n'.join(destring_code) + '\n' + + # Save code + with open(do_path, 'w') as f: + f.write(destring_code) + +def create_label_code(codebook, do_path): + """ Create variable label code. 
+ + Parameters + ---------- + codebook : pd.DataFrame + Codebook to generate Stata code from. + do_path : str + Do-file path to write Stata code. + + Notes + ----- + - Generates Stata code variable labeling variables in column `Variable name` + with column `Variable label`. + """ + + # Process codebook + codebook = codebook.copy() + codebook = (codebook.dropna(subset = ['Variable name']) + .set_index(['Variable name']) + .to_dict() + ) + codebook = codebook['Variable label'] + + # Generate code + label = 'label var %s "%s"' + label_code = [label % (k, v.strip()) for k, v in codebook.items()] + label_code = '\n'.join(label_code) + '\n' + + with open(do_path, 'w') as f: + f.write(label_code) + +def create_label_var_code(codebook, do_path): + """ Create value label code. + + Parameters + ---------- + codebook : pd.DataFrame + Codebook to generate Stata code from. + do_path : str + Do-file path to write Stata code. + + Notes + ----- + - Generates code value labeling variables in column `Variable name` + according to the mapping in column `Values`. + - The left-hand side of `Values` is the actual value of the variable. + The right-hand side of `Values` is the value to use for value labeling. + For example, `1 = Yes` will be parsed as label `1` with `Yes`. + - All text in brackets are ignored. For example, `1 = 1 [Yes]` will be + parsed as label `1` with `1`. + """ + + # Process codebook + codebook = codebook.copy() + codebook[['Variable name']] = codebook[['Variable name']].fillna(method = 'ffill') + codebook = codebook.dropna(subset = ['Values']) + + # Process values + codebook['Values'] = codebook['Values'].apply(lambda x: re.sub(' \[.*\]', '', x)) + codebook['Values'] = codebook['Values'].str.split(" = ") + codebook['Values'] = codebook['Values'].apply(lambda x: [x[0], '"%s"' % x[1]]) + codebook = codebook.groupby(['Variable name']).agg({'Values': sum}) + codebook = codebook.to_dict()['Values'] + + # Generate code + define = 'label define %s %s, replace' + define_code = [define % (k, ' '.join(v)) for k, v in codebook.items()] + define_code = '\n'.join(define_code) + + values = 'label values %s %s' + values_code = [values % (k, k) for k, v in codebook.items()] + values_code = '\n'.join(values_code) + '\n' + + # Save code + with open(do_path, 'w') as f: + f.write(define_code + '\n\n' + values_code) + +main() \ No newline at end of file diff --git a/17/replication_package/code/data/temptation/external.txt b/17/replication_package/code/data/temptation/external.txt new file mode 100644 index 0000000000000000000000000000000000000000..d4867a61f7c1fafaf014614015a21db16ad11f41 --- /dev/null +++ b/17/replication_package/code/data/temptation/external.txt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9050343cedbecb71c95564c56be746870685bc4d2e3bebd5b3640ba6259fb1ad +size 726 diff --git a/17/replication_package/code/data/temptation/input.txt b/17/replication_package/code/data/temptation/input.txt new file mode 100644 index 0000000000000000000000000000000000000000..1cd03ab15e422e485ab1d293e9ee3e7988480a36 --- /dev/null +++ b/17/replication_package/code/data/temptation/input.txt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:f8b5f4a371b96a40347130b3c0e028ed8ea56518913cb41afc5f6f9d33097d84 +size 710 diff --git a/17/replication_package/code/data/temptation/make.py b/17/replication_package/code/data/temptation/make.py new file mode 100644 index 0000000000000000000000000000000000000000..32f75fa614fb26bad784c3327f9b37dd18b34521 --- /dev/null +++ 
b/17/replication_package/code/data/temptation/make.py @@ -0,0 +1,65 @@ +################### +### ENVIRONMENT ### +################### +import git +import imp +import os + +### SET DEFAULT PATHS +ROOT ='../..' + +PATHS = { + 'root' : ROOT, + 'lib' : os.path.join(ROOT, 'lib'), + 'config' : os.path.join(ROOT, 'config.yaml'), + 'config_user' : os.path.join(ROOT, 'config_user.yaml'), + 'input_dir' : 'input', + 'external_dir' : 'external', + 'output_dir' : 'output', + 'output_local_dir' : 'output_local', + 'makelog' : 'log/make.log', + 'output_statslog' : 'log/output_stats.log', + 'source_maplog' : 'log/source_map.log', + 'source_statslog' : 'log/source_stats.log', +} + +### LOAD GSLAB MAKE +f, path, desc = imp.find_module('gslab_make', [PATHS['lib']]) +gs = imp.load_module('gslab_make', f, path, desc) + +### LOAD CONFIG USER +PATHS = gs.update_paths(PATHS) +gs.update_executables(PATHS) + +############ +### MAKE ### +############ + +### START MAKE +gs.remove_dir(['input', 'external']) +gs.clear_dir(['output', 'log', 'temp']) +gs.start_makelog(PATHS) + +### GET INPUT FILES +inputs = gs.link_inputs(PATHS, ['input.txt']) +externals = gs.link_externals(PATHS, ['external.txt']) +# gs.write_source_logs(PATHS, inputs + externals) +# gs.get_modified_sources(PATHS, inputs + externals) + +### RUN SCRIPTS +gs.run_r(PATHS, program = 'code/get_top_apps.r') +gs.run_r(PATHS, program = 'code/get_installed_apps.r') +gs.run_python(PATHS, program = 'code/collapse_hourly.py') +gs.run_python(PATHS, program = 'code/get_PDUsage.py') +gs.run_r(PATHS, program = 'code/aggregate_dashboard.r') +gs.run_python(PATHS, program = 'code/translate_codebook.py') +gs.run_stata(PATHS, program = 'code/clean_data.do') + +### LOG OUTPUTS +gs.log_files_in_output(PATHS) + +### CHECK FILE SIZES +gs.check_module_size(PATHS) + +### END MAKE +gs.end_makelog(PATHS) diff --git a/17/replication_package/code/data/temptation/raw/codebook.xlsx b/17/replication_package/code/data/temptation/raw/codebook.xlsx new file mode 100644 index 0000000000000000000000000000000000000000..e61a2a0be579721f6bd5cdda6848e7983d7c1045 --- /dev/null +++ b/17/replication_package/code/data/temptation/raw/codebook.xlsx @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e7c741990bd21eb9bab76643df657fff0f7bfc2c8500f64325178d371127a16e +size 63101 diff --git a/17/replication_package/code/data/temptation/raw/top_apps_cleaned.csv b/17/replication_package/code/data/temptation/raw/top_apps_cleaned.csv new file mode 100644 index 0000000000000000000000000000000000000000..1ffdd8c7d53d6e6678ac1d8e734c9ea06e7fd1c3 --- /dev/null +++ b/17/replication_package/code/data/temptation/raw/top_apps_cleaned.csv @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:b1d0ba6bc03c11d3f338d9c6d49a7d9e14536eb1587e6722bf2f8fa44c5bc2d9 +size 27668 diff --git a/17/replication_package/code/docs/DescriptionOfSteps.pdf b/17/replication_package/code/docs/DescriptionOfSteps.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e119da124e9899c8a40c3476980d273ed43d1efc --- /dev/null +++ b/17/replication_package/code/docs/DescriptionOfSteps.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:12a3a39df80b2988af1482be8207d837e8d82369ed2dec4d2bdab7c76acbebb6 +size 68317 diff --git a/17/replication_package/code/docs/MappingsTablesAndFigures.pdf b/17/replication_package/code/docs/MappingsTablesAndFigures.pdf new file mode 100644 index 0000000000000000000000000000000000000000..891be9d0b6023ddc151a820fdd52eefc2bac499a --- /dev/null +++ 
b/17/replication_package/code/docs/MappingsTablesAndFigures.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ff2df15fc3fa2b9efe99d3c7b73727c4255ffcfe48e24092ec81168d88b5b062 +size 50145 diff --git a/17/replication_package/code/docs/Step1_Step2_DAG.pdf b/17/replication_package/code/docs/Step1_Step2_DAG.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e54ff4a5af024bd827e8abe90cc3e4e02be8fe24 --- /dev/null +++ b/17/replication_package/code/docs/Step1_Step2_DAG.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:3e2211b366e66c0c291b2bbd774a21c478ffa27085d8ff05d3e2316dc6fe62f7 +size 28536 diff --git a/17/replication_package/code/experiment_design/AppScreenshots/PDScreenshots1.pdf b/17/replication_package/code/experiment_design/AppScreenshots/PDScreenshots1.pdf new file mode 100644 index 0000000000000000000000000000000000000000..419f39699f82d59f30fa83dfa5e7a3dfb1196a20 --- /dev/null +++ b/17/replication_package/code/experiment_design/AppScreenshots/PDScreenshots1.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:36504b600af914ac6ce980bea6e20a4eae3086f773321b227f42868f0c8f7eec +size 557446 diff --git a/17/replication_package/code/experiment_design/AppScreenshots/PDScreenshots2.pdf b/17/replication_package/code/experiment_design/AppScreenshots/PDScreenshots2.pdf new file mode 100644 index 0000000000000000000000000000000000000000..e2c13a202b4817e5c426ed7f1d51ec68fd0345cc --- /dev/null +++ b/17/replication_package/code/experiment_design/AppScreenshots/PDScreenshots2.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9e9df1d08e3caa40e841f79a30e1ebc3359cba179b69816777f2abfa8931025e +size 534912 diff --git a/17/replication_package/code/experiment_design/AppScreenshots/PDScreenshots2_X.pdf b/17/replication_package/code/experiment_design/AppScreenshots/PDScreenshots2_X.pdf new file mode 100644 index 0000000000000000000000000000000000000000..00b49323a259116ba01b43c181e5aaa57d8904c3 --- /dev/null +++ b/17/replication_package/code/experiment_design/AppScreenshots/PDScreenshots2_X.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:22d030a895ee3eb45055e795281385ddbd25a9d17452af6dc79cebd317914505 +size 517674 diff --git a/17/replication_package/code/experiment_design/AppScreenshots/PDScreenshots_limit.pdf b/17/replication_package/code/experiment_design/AppScreenshots/PDScreenshots_limit.pdf new file mode 100644 index 0000000000000000000000000000000000000000..07a1faa8fa6bef1e8e03fee2874f9d7253560eaf --- /dev/null +++ b/17/replication_package/code/experiment_design/AppScreenshots/PDScreenshots_limit.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:e455be839445cfbc08f99bc4c8d3bba74f625d5d84fc3a8d2ea4b7df43aed977 +size 557628 diff --git a/17/replication_package/code/experiment_design/AppScreenshots/PDScreenshots_tracking.pdf b/17/replication_package/code/experiment_design/AppScreenshots/PDScreenshots_tracking.pdf new file mode 100644 index 0000000000000000000000000000000000000000..58516607e7af28bedce55928942b23060d20a469 --- /dev/null +++ b/17/replication_package/code/experiment_design/AppScreenshots/PDScreenshots_tracking.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6be627a100767421518ead2369356df32cb561fde0b3ee7022363a464b98b746 +size 557626 diff --git a/17/replication_package/code/experiment_design/Recruitment.pdf b/17/replication_package/code/experiment_design/Recruitment.pdf new file mode 100644 index 
0000000000000000000000000000000000000000..c2dea813e01e135de15382ed0981b3028b190af8 --- /dev/null +++ b/17/replication_package/code/experiment_design/Recruitment.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:07cc38d59d757ebe97a315108efe539dc3a382515c77fab256f933fdcba42e34 +size 229863 diff --git a/17/replication_package/code/experiment_design/Survey1_Baseline.pdf b/17/replication_package/code/experiment_design/Survey1_Baseline.pdf new file mode 100644 index 0000000000000000000000000000000000000000..960bc3eba0da7e59bde489c094828a280e6b8c64 --- /dev/null +++ b/17/replication_package/code/experiment_design/Survey1_Baseline.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:d6afa15d48432b44b701a3b422ef8e81ee89c93b64ce2aa6d353c4e17b9b567e +size 228577 diff --git a/17/replication_package/code/experiment_design/Survey2_Midline.pdf b/17/replication_package/code/experiment_design/Survey2_Midline.pdf new file mode 100644 index 0000000000000000000000000000000000000000..22b1d40fdcbf7c47f1223c11a9f0644a14b1bf56 --- /dev/null +++ b/17/replication_package/code/experiment_design/Survey2_Midline.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:cf8f8d280e21be9a7d5fcfc9f4a7df63f65a9ed8944d556f8e701676ea30b445 +size 181869 diff --git a/17/replication_package/code/experiment_design/Survey3_Endline.pdf b/17/replication_package/code/experiment_design/Survey3_Endline.pdf new file mode 100644 index 0000000000000000000000000000000000000000..d559396960c272eb2871f4991557d3748718d915 --- /dev/null +++ b/17/replication_package/code/experiment_design/Survey3_Endline.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:ae3755eef4877b22fa604d7c5474733b5dd2d8e728c298bd7fde98e3c687ce4e +size 203642 diff --git a/17/replication_package/code/experiment_design/Survey4_Endline.pdf b/17/replication_package/code/experiment_design/Survey4_Endline.pdf new file mode 100644 index 0000000000000000000000000000000000000000..f1088b152eb434a0ea30b60bd89c97102d9a0c54 --- /dev/null +++ b/17/replication_package/code/experiment_design/Survey4_Endline.pdf @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6e26aab2df295950c0332fbfe7ffb5eca7b0fb2d7484161c8793892073debcd8 +size 205404 diff --git a/17/replication_package/code/lib/__init__.py b/17/replication_package/code/lib/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/17/replication_package/code/lib/data_helpers/__init__.py b/17/replication_package/code/lib/data_helpers/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/17/replication_package/code/lib/data_helpers/builder_utils.py b/17/replication_package/code/lib/data_helpers/builder_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..95b8c7ea1eb8d256f31a4ce34898b5547ddd6f84 --- /dev/null +++ b/17/replication_package/code/lib/data_helpers/builder_utils.py @@ -0,0 +1,221 @@ +import pandas as pd +import os +import shutil +import re +from functools import reduce +from datetime import datetime, timedelta + +from lib.experiment_specs import study_config +from lib.data_helpers import data_utils + +"""loads the phone data config from the provided config path""" + +class BuilderUtils(): + + def get_config(self, config_path): + if os.path.isfile(config_path): + pd_config_df = pd.read_csv(config_path,index_col= "index") + pd_config_dict 
= pd_config_df.to_dict(orient = 'index') + return pd_config_dict + else: + return {} + + """ + - Purpose: transports zipped files from PhoneDashboardPort and PCPort to the PhoneAddictionDropbox to the specified directory + - Inputs: + - port: specifies location of the port + - keyword: specifies the kind of inport from the source (e.g. budget, use, etc). the keyword must be in the file name for the function to work + - new_directory: the directory where the files will be transported + - """ + def transport_new_zip_files(self,port,keyword,new_directory): + new_adds = [] + added_files = os.listdir(new_directory) + empty_files_dir = os.listdir(os.path.join("data","external","input","PhoneDashboard","BuggyFiles","Empty")) + for zipfile in os.listdir(port): + + if ".zip" not in zipfile: + continue + + # if "UseIndiv" nearly exactly do process as "Use" + if keyword == "UseIndiv": + keyword = "Use" + + # change zipfile name for pd use data + if ("full" in zipfile) & (keyword == "Use"): + new_zipfile = zipfile.replace("full","use") + os.rename(os.path.join(port, zipfile), os.path.join(port, new_zipfile)) + zipfile = new_zipfile + + # change zipfile name for pd custom delay data, as soon as possible + if ("snooze_delays" in zipfile): + new_zipfile = zipfile.replace("snooze_","") + os.rename(os.path.join(port, zipfile), os.path.join(port, new_zipfile)) + zipfile = new_zipfile + + if (keyword.lower() not in zipfile) and (keyword.upper() not in zipfile): + continue + + #if it already exists, skip + if zipfile in added_files: + continue + + #if in the empty or corrupt directory in PA dropbox, also place it in empty or corrupt dir in port + if zipfile in empty_files_dir: + try: + old_file = os.path.join(port, zipfile) + new_file = os.path.join(port, "Empty", zipfile) + os.rename(old_file, new_file) + except: + print(f"{zipfile}couldn't move zipfile to PDPort/Empty") + continue + + + #if out of date range, skip + match = re.search(r'\d{4}-\d{2}-\d{2}', zipfile) + zip_date = datetime.strptime(match.group(), '%Y-%m-%d') + if zip_date <= study_config.first_pull or zip_date >= study_config.last_pull: + continue + + #else, copy and transfer it + else: + old_file_path = os.path.join(port,zipfile) + new_file_path = os.path.join(new_directory,zipfile) + new_adds.append(zipfile) + shutil.copy(old_file_path,new_file_path) + print(new_adds) + return new_adds + + """ updates the existing config by adding the new config entries, and saves the updated config""" + def update_config(self,existing,new,config_path): + existing.update(new) + pd_config_df = pd.DataFrame.from_dict(existing, orient='index').reset_index() + pd_config_df.to_csv(config_path, index=False) + + + """Default raw data processor invoked by event_puller.py""" + @staticmethod + def default_puller_process(df: pd.DataFrame, zip_file: str, event_puller): + for time_col in event_puller.time_cols: + df = data_utils.clean_iso_dates(df, time_col, keep_nan=False, orig_tz=event_puller.raw_timezone) + df = df.drop(columns=[time_col + "Date", time_col + "DatetimeHour", time_col + "EasternDatetimeHour"]) + df = df.rename(columns={time_col + "Datetime": time_col}) + + if "TimeZone" in df.columns: + df = df.drop(columns=["TimeZone"]) + + match = re.search(r'\d{4}-\d{2}-\d{2}', zip_file) + df["AsOf"] = datetime.strptime(match.group(), '%Y-%m-%d') + df["AsOf"] = df["AsOf"].apply(lambda x: x.date()) + return df + + # add phase column to each obs based study_config survey start times + # start_buffer =1 means that days will be counted the day after the survey start + # 
end_buffer = -1 means that the days will be counted the day before the survey start + @staticmethod + def add_phase_label(raw_df, raw_df_date, start_buffer=1, end_buffer=-1): + df = raw_df.copy() + if "Phase" in df.columns.values: + df = df.drop(columns="Phase") + + for phase, specs in study_config.phases.items(): + # label use with phases if we're a day into a phase + if datetime.now() > specs["StartSurvey"]["Start"] + timedelta(1): + start_date = (study_config.phases[phase]["StartSurvey"]["Start"] + timedelta(start_buffer)).date() + end_date = (study_config.phases[phase]["EndSurvey"]["Start"] + timedelta(end_buffer)).date() + df.loc[(df[raw_df_date] >= start_date) & (df[raw_df_date] <= end_date), "Phase"] = phase + + df["Phase"] = df["Phase"].astype('category') + return df + + """ + Purpose: Iterates through a subsets dict and creates new avg daily use columns + + One key-value pair of a subset dict: + + "PCSC" : { + "Filters": {"SCBool":[True]}, + "DenomCol": "DaysWithUse"}, + + """ + @staticmethod + def get_subsets_avg_use(df_p, subsets: dict): + subset_dfs = [] + for label, specs in subsets.items(): + filters = specs["Filters"] + denom_col = specs["DenomCol"] + num_cols = specs["NumCols"] + subset_df = BuilderUtils.subset_avg_use(df_p, label, filters, denom_col,num_cols) + subset_dfs.append(subset_df) + df_merged = reduce(lambda x, y: pd.merge(x, y, on='AppCode', how = 'outer'), subset_dfs) + + # If they are in this df, then they recorded some use in the phase, so we convert all of their nan's + # (i.e. for a specfic subset) in the df to 0 + df_merged = df_merged.fillna(0) + return df_merged + + """ + Input: + - df: the event level df in the given phase + - label: the variable label + - specs: {variables to subset on: values of variables to keep} + - denom_col: the column name of the variable in the df which contains the denomenator value + - if == "NAN", the function will create it's own denomenator equal to days for which there is non-zero use for + the given subset + - num_cols: list of columns to sum over (often it's just [Use], but it can be [Checks,Pickups,Use] + """ + @staticmethod + def subset_avg_use(df: pd.DataFrame, label: str, filters: dict, denom_col: str, num_cols: list): + # if we don't want to subset the phase data at all + if len(filters) == 0: + pass + + # go through each filter (note that at all filters for each variable must be met) + else: + for var, keep_vals in filters.items(): + df = df.loc[df[var].isin(keep_vals),:] + + for col in [denom_col]+[num_cols]: + df[col] = df[col].fillna(0) + + sum_df = df.groupby(by=['AppCode',denom_col], as_index=False)[num_cols].sum() + + sum_dfs = [] + for num_col in num_cols: + sum_df = sum_df.rename(columns={num_col: f"{label}{num_col}Total"}) + sum_df[f"{label}{num_col}Total"] = sum_df[f"{label}{num_col}Total"].round(0) + sum_df[f"{label}{num_col}"] = (sum_df[f"{label}{num_col}Total"] / (sum_df[denom_col])).round(0) + sum_dfs.append(sum_df[["AppCode", f"{label}{num_col}", f"{label}{num_col}Total"]]) + final = reduce(lambda df1, df2: pd.merge(df1, df2, on='AppCode', how = 'outer'), sum_dfs) + return final + + # add phase column to each obs based on time they completed the survey, indicating what phase they are in at the timestamp + # start_buffer =1 means that days will be counted the day after the survey start + # end_buffer = -1 means that the days will be counted the day before the survey start + @staticmethod + def add_personal_phase_label(raw_df, raw_master, raw_df_date, start_buffer=1, end_buffer=-1, drop_bool=True): + df 
= raw_df.copy() + if "Phase" in df.columns.values: + df = df.drop(columns="Phase") + + for phase, specs in study_config.phases.items(): + + # label use with phases if we're a day into a phase + if datetime.now() > specs["StartSurvey"]["Start"] + timedelta(1): + + raw_master = data_utils.inpute_missing_survey_datetimes(raw_master, phase) + old_code = study_config.phases[phase]["StartSurvey"]["Code"] + new_code = study_config.phases[phase]["EndSurvey"]["Code"] + start_col = f"{old_code}_SurveyEndDatetime" + end_col = f"{new_code}_SurveyStartDatetime" + + df = df.merge(raw_master[["AppCode", start_col, end_col]], on="AppCode", how="inner") + for col in [start_col, end_col]: + df[col] = pd.to_datetime(df[col], infer_datetime_format=True).apply(lambda x: x.date()) + + df.loc[(df[raw_df_date] >= df[start_col].apply(lambda x: x + timedelta(start_buffer))) + & (df[raw_df_date] <= df[end_col].apply(lambda x: x + timedelta(end_buffer))), "Phase"] = phase + + if drop_bool: + df = df.drop(columns=[start_col, end_col]) + df["Phase"] = df["Phase"].astype('category') + return df diff --git a/17/replication_package/code/lib/data_helpers/clean_events.py b/17/replication_package/code/lib/data_helpers/clean_events.py new file mode 100644 index 0000000000000000000000000000000000000000..df26a3a50e880c75972db911f5936c217eb74d69 --- /dev/null +++ b/17/replication_package/code/lib/data_helpers/clean_events.py @@ -0,0 +1,106 @@ +import sys + + +from lib.experiment_specs import study_config +from lib.data_helpers import data_utils +from lib.data_helpers import test +from lib.utilities import codebook +from lib.utilities import serialize +from functools import reduce +import pandas as pd +import os +from datetime import datetime, timedelta +from lib.data_helpers.builder_utils import BuilderUtils + +""" +Object that cleans phone dashboard and PC Dashbaord data, by doing the following: + +""" + +class CleanEvents(): + + def __init__(self, source: str, keyword: str): + """ + establishes a bunch of paths + + Parameters + ---------- + source: either "PhoneDashboard" or "PCDashboard" + keyword: the kind of [PhoneDasboard] data, like "Use" or "Alternative + """ + self.user_event_file = os.path.join("data","external", "intermediate", source, f"{keyword}Summary.csv") + self.clean_file = os.path.join("data", "external", "intermediate", source, f"{keyword}") + + self.clean_test_file = os.path.join("data","external", "intermediate_test", source, f"{keyword}") + self.keyword = keyword + + self.config_user_dict = serialize.open_yaml("config_user.yaml") + + def clean_events(self, raw_event_df: pd.DataFrame, date_col: str, cleaner, phase_data: bool = True): + """ + - subsets data: only appcodes in study, within the study dates + - adds phase label + - creates phase level data + - applies specific cleaning functions given as input + + Parameters + ---------- + raw_event_df: a dataframe that contains raw PD or PC data + date_col: the column name that gives the date of the row, used for dividing rows into phases + cleaner: custom cleaner object + phase_data: whether or not the cleaner should create phase level data + + Returns + ------- + + """ + print(f"\t Cleaning {self.keyword} {datetime.now()}") + df_list = [] + print(len(raw_event_df)) + df_clean = cleaner.prep_clean(raw_event_df) + print(f"{len(df_clean)}: After Clean") + df_clean = df_clean.loc[(df_clean[date_col] >= study_config.first_pull.date()) & (df_clean[date_col] <= study_config.last_pull.date())] + + df_clean = BuilderUtils.add_phase_label(raw_df = df_clean, raw_df_date 
= date_col) + print(f"Length of file before saving {len(df_clean)}") + print(df_clean.memory_usage(deep=True)/1024**2) + + if self.config_user_dict["local"]["test"]: + test.save_test_df(df_clean, self.clean_test_file) + + else: + try: + serialize.save_pickle(df_clean, self.clean_file) + except: + print("Couldn't save pickle!") + + #try: + # serialize.save_hdf(df_clean, self.clean_file) + #except: + # print("Couldn't save hdf!") + + if phase_data ==True: + for phase,specs in study_config.phases.items(): + #we have to wait two days to begin collecting new phase data because phase use doesn't start untill a day + # after survey launch, and that data isn't sent to dropbox until the day after + if datetime.now() 0: + print(f" \t Duplicate Columns to delete: {df_dup.columns}") + df = df.loc[:, ~df.columns.duplicated()] + return df + + def _process_col_values(self, df): + df = df.astype(str).applymap(lambda x: x.strip()) + + for p_col in ["PhoneNumber","PhoneNumberConfirm","FriendContact"]: + if p_col in df.columns.values: + df[p_col] = df[p_col].apply(lambda x: re.sub("[^0-9]", "", str(x))) + df[p_col] = df[p_col].apply(lambda x: "1"+x if len(x)==10 else x) + df[p_col] = df[p_col].apply(lambda x: x if len(x) == 11 else 'nan') + + #silly bug + if self.survey_name == "Baseline": + print("\t dealing with dumb baseline bug to ensure appcode assertion passes") + print(f"\t Len before dropping sherry {len(df)}") + df = df.loc[df["MainEmail"]!="xy1087@nyu.edu"] + print(f"\t len after dropping sherry {len(df)}") + + # ADD APPCODES to survey without appcode + if "AppCode" not in df.columns.values: + print("\t No AppCode in Survey") + # make the main email col in pii the raw email col in the survey w/o appcode + pii = serialize.open_pickle(self.pii_path).reset_index().rename(columns = {"index":"AppCode"}) + email_col = study_config.surveys[self.survey_name]["RawEmailCol"] + pii = pii.loc[pii["MainEmail"] != "nan",["AppCode", "MainEmail"]].rename(columns = {"MainEmail":email_col}) + + df = df.merge(pii, on=email_col, how='left') + print("done merger") + + for appcode_col in ["AppCode", "AppCodeConfirm"]: + if appcode_col in df.columns.values: + df = data_utils.add_A_to_appcode(df,appcode_col) + "If no appcode, assign 'UNASSIGNED_' + ResponseID as Appcode" + df.loc[df[appcode_col]=="nan", appcode_col] = 'UNASSIGNED_'+df["ResponseID"] + + if "AppCode" not in df.columns: + print("AppCode not in df.columns. 
Anonymize with Response ID") + df["AppCode"] = 'UNASSIGNED_'+df["ResponseID"] + + #Temptation Stratification for SurveyChecks + if self.survey_name =="Recruitment": + df["Age"] = df["Age"].astype(float) + df.loc[(df["Age"]>=18)&(df["Age"]<=34),"AgeStrat"] = "18-34" + df.loc[(df["Age"] >= 35) & (df["Age"] <= 50), "AgeStrat"] = "35-50" + df.loc[(df["Age"] > 50), "AgeStrat"] = "50+" + df["Age"] = df["Age"].fillna("nan").astype(str) + + + return df + + def _process_datetimes(self,df): + df['SurveyStartEasternDatetime'] = pd.to_datetime(df['SurveyStartDatetime'], infer_datetime_format=True) + df['SurveyEndEasternDatetime'] = pd.to_datetime(df['SurveyEndDatetime'], infer_datetime_format=True) + df.loc[:, 'OpenEasternDateTime'] = self.open_date_time + df.loc[:, 'CloseEasternDateTime'] = self.close_date_time + + # Create Local Datetime + try: + # Get the modal timezone for each user to adjust the easter survey times to local time of user + timezones = serialize.open_pickle(self.timezones_path) + df = df.merge(timezones, on = "AppCode", how = 'left') + for time_var in ['SurveyStartEasternDatetime','SurveyEndEasternDatetime','OpenEasternDateTime','CloseEasternDateTime']: + df[time_var.replace("Eastern","")]= df[time_var]+df["EastToLocal"] + + #if we can't find timezone, let local timezone equal eastern timezone + df.loc[df[time_var.replace("Eastern","")].isnull(),time_var.replace("Eastern","")] = df[time_var] + df.loc[df["EastToLocal"].isnull(),"EastToLocal"] = timedelta(0) + except: + print("Could not merge status exporter likely because Appcode not in survey or because timezones file not in path") + df['SurveyStartDatetime'] = pd.to_datetime(df['SurveyStartDatetime'], infer_datetime_format=True) + df['SurveyEndDatetime'] = pd.to_datetime(df['SurveyEndDatetime'], infer_datetime_format=True) + return df + + return df + + """for each possible email column, drop nan's, drop banned emails, keep last of duplicate""" + + def _filter_emails(self, df): + print(f"obs before filtering tester emails {len(df)}") + other_email_cols = ['SchoolEmail', 'PreferEmail', 'RecipientEmail', "Email", "EmailConfirm", "Email.1"] + + if not self.test: + for email_col in other_email_cols+["ParentEmail"]: + if email_col in list(df.columns.values): + df = df.loc[~df[email_col].isin(list(self.testers["Email"]))] + else: + print("Because self.test == True, all banned emails will be included!!") + + # ensure the main email col is "MainEmail" + raw_email_col = study_config.surveys[self.survey_name]["RawEmailCol"] + if raw_email_col != "MainEmail": + + # drop the current MainEmail Version, and create a new one using the raw email data + if "MainEmail" in df.columns: + df = df.drop(columns=["MainEmail"]) + df = df.rename(columns={raw_email_col: "MainEmail"}) + + # drop other email cols + for col in other_email_cols: + if col in df.columns: + df = df.drop(columns=[col]) + return df + + # creates new variable "Complete" which indicates if the participant completed the survey, or the last column they filled in + def _validate_completes(self, df): + last_question = study_config.surveys[self.survey_name]["LastQuestion"] + + # Assert the last question will not be filled by an anonymous code, regardless if the cell were empty + id_cols = list(study_config.id_cols.values()) + assert last_question not in sum(id_cols, []) + + question_index = list(df.columns).index(last_question) + survey_cols = list(df.columns)[:question_index+1:] + reverse_survey_cols = survey_cols[::-1] + + complete_q = 
study_config.surveys[self.survey_name]["CompleteQuestion"] + df.loc[(df["Finished"] == 'True') & (df[complete_q] != 'nan'), "Complete"] = "Complete" + df.loc[df["Complete"] != 'Complete', "Complete"] = "UnfinishedOther" + + # Find the last completed survey variable + df_dict = df.to_dict(orient = 'index') + for key, value in df_dict.copy().items(): + if value["Complete"] == "Complete": + continue + else: + for col in reverse_survey_cols: + + #exclude empty cols + if (value[col] != "nan"): + try: + #exclude cols that are automatically filled in + if "UNASSIGNED" not in str(value[col]): + df_dict[key]["Complete"] = col + break + except: + print("bug") + df = pd.DataFrame.from_dict(df_dict, orient = "index") + return df + + def _filter_duplicates(self, df): + # filter out duplicates-- prioritize keeping complete obs, then the later obs + ranks = range(0,len(df.columns)) + rank_of_cols = dict(zip(list(df.columns),ranks)) + rank_of_cols["Complete"] = 9999999 + + df["CompleteRank"] = df["Complete"].apply(lambda x: rank_of_cols[x]) + + scratch_path = os.path.join(self.intermediate_dir, "Scratch") + + for dup_col in ["MainEmail","AppCode"]: + if dup_col not in df.columns: + print(f"{self.survey_name} doesn't have an {dup_col} column") + else: + print(f"obs before dropping dups of {dup_col}: {len(df)}") + + #sort obs by, appcode, how far they completed the survey, then time survey was started + df = df.sort_values(by = [dup_col,"CompleteRank","SurveyStartDatetime"]) + + #if dup_col == "MainEmail": + # dup = df[(df.duplicated(subset=["MainEmail"], keep=False)) & (df['MainEmail'] != "nan")] + # dup.to_csv(os.path.join(scratch_path, f"RecruitDups.csv")) + + # mark all entries as duplicates, except for the last one + df = df.loc[(~df.duplicated(subset=[dup_col], keep='last')) | (df[dup_col] == "nan")] + + #if dup_col == "MainEmail": + # df.to_csv(os.path.join(scratch_path, f"Keeps.csv")) + + print(f"obs after dropping dups of {dup_col}: {len(df)}") + df = df.sort_values("SurveyStartDatetime") + return df + + def _filter_timeframe(self, df): + print(f"obs before droppingpeople that began before survey start or ended after close {len(df)}") + if self.test == False: + df = df.loc[df['SurveyStartEasternDatetime'] >= df['OpenEasternDateTime']] + df = df.loc[df['SurveyEndEasternDatetime'] <= df['CloseEasternDateTime']] + return df + + def _update_codebook(self, df): + codebook_dic = {} + for col in df.columns: + codebook_dic[col] = { + "VariableLabel": str(df.loc[0,col]), + "DataType": df.dtypes[col], + "PrefixEncoding": "Survey" + } + #Remove Timing Data from Codebook + timing_vars = ["Timing-ClickCount", + "Timing-PageSubmit", + "Timing-FirstClick", + "Timing-LastClick", + "BrowserMetaInfo"] + for varname, chars in codebook_dic.copy().items(): + if len([x for x in timing_vars if x in chars["VariableLabel"].replace(" ","")]) > 0: + del codebook_dic[varname] + + codebook.add_vardic_to_codebook(codebook_dic) + + def _remove_embedded_data(self, df_f): + + #first remove all the qualtrics tracker vars + unimportant_vars = [y for y in df_f if + any(x in y for x in ["FirstClick", "LastClick", "PageSubmit", "ClickCount"])] + df_f = df_f.drop(columns=unimportant_vars) + + #then remove the embedded data + non_survey_vars_to_keep = study_config.main_cols + [f"{self.code}_{x}" for x in study_config.kept_survey_data] + for question_type in ["FirstQuestion", "LastQuestion"]: + if question_type in study_config.surveys[self.survey_name]: + last_question = study_config.surveys[self.survey_name]["Code"] + "_" + \ + 
study_config.surveys[self.survey_name][question_type] + question_index = list(df_f.columns).index(last_question) + if question_type == "FirstQuestion": + keep_cols = list(set(non_survey_vars_to_keep + list(df_f.columns)[question_index:])) + else: + keep_cols = list(set(non_survey_vars_to_keep + list(df_f.columns)[:question_index + 1])) + df = df_f[[x for x in df_f.columns if x in keep_cols]] + + # re order columns (put main cols in front and maintain order of survey columns + var_order = [x for x in non_survey_vars_to_keep if x in df_f.columns] + [x for x in df_f.columns if x not in non_survey_vars_to_keep] + kept_var_order = [x for x in var_order if x in df.columns] + df = df[kept_var_order] + + # remove hyphens from df, but if var is prefixed, drop IF the non prefix var is df + """ + for var in df.columns: + if f"{self.code}_{self.code}" in var: + non_pref_var = var.replace(f'{self.code}_', '') + if var.replace(f"{self.code}_", "") in df.columns: + # if df[var] == df[var.replace(f"{self.code}_","")]: + if df.equals(df[[var, non_pref_var]]): + df = df.drop(columns=[var]) + print(f"Dropping {var} b/c {non_pref_var} and identical") + else: + test = df[[var, non_pref_var]] + print(f"Keeping {var} b/c {var} != {non_pref_var}") + else: + print(f"Keeping {var} b/c unprefixed var still in df") + """ + return df + + def _reshape_text_survey(self,df): + df["SurveyStartDate"] = df["SurveyStartDatetime"].dt.date + df = BuilderUtils.add_phase_label(raw_df = df,raw_df_date="SurveyStartDate", start_buffer=0, end_buffer=-1) + + # Replace Values of phase with the start survey code + codes = [study_config.phases[x]["StartSurvey"]["Code"] for x in list(study_config.phases.keys())] + rename_dic = dict(zip(list(study_config.phases.keys()), codes)) + df["Phase"] = df["Phase"].apply(lambda x: rename_dic[x] if x in rename_dic else x) + + keep_vars =[study_config.surveys[self.survey_name]["CompleteQuestion"],"SurveyStartDatetime","SurveyEndDatetime","Complete"] + df_p = df.pivot_table(index=["AppCode"], + values=keep_vars, + columns=["Phase"], + aggfunc='first') + + + df_p.columns = [f'_{self.code}'.join(col[::-1]).strip() for col in df_p.columns.values] + df_p = df_p.reset_index() + + return df_p \ No newline at end of file diff --git a/17/replication_package/code/lib/data_helpers/confidential.py b/17/replication_package/code/lib/data_helpers/confidential.py new file mode 100644 index 0000000000000000000000000000000000000000..10172d47b085adee547cc2f6623852fddfb34213 --- /dev/null +++ b/17/replication_package/code/lib/data_helpers/confidential.py @@ -0,0 +1,78 @@ + +import sys +import os +import pandas as pd +from lib.experiment_specs import study_config +from lib.utilities import serialize + +""" +Class that contains functions to anonomize all PII info or de-anonymize PII columns +- all PII columns are replace with the appcode value +""" +class Confidential: + id_file = os.path.join("data","external", "dropbox_confidential","ContactLists","Generator","PII") + + """populate the PII dataframe with column values for the given survey""" + @staticmethod + def build_id_map(df, survey_name, id_file = id_file): + for survey, id_cols in study_config.id_cols.items(): + if survey in survey_name: + id_dict = serialize.soft_df_open(id_file).to_dict(orient = 'index') + id_cols = ["AppCode"] + [x for x in df.columns if x in study_config.id_cols[survey]] + + new_pii = df.loc[df["AppCode"].notnull(), id_cols] + new_pii.index = new_pii["AppCode"] + + new_pii_dict = new_pii.drop(columns = "AppCode").to_dict("index") + + if 
len(id_dict) ==0: + """if id dict is empty, replace with with the new data""" + id_dict = new_pii_dict.copy() + + else: + """update pii dict""" + for appcode in new_pii_dict.keys(): + + """if appcode not in the id_dict, add it""" + if appcode not in id_dict: + id_dict[appcode] = new_pii_dict[appcode] + else: + """if appcode is in the id_dict, add or update the columns""" + for col,val in new_pii_dict[appcode].items(): + """if col is not in the id_dict, add it (UNCLEAR HOW THIS WILL WORK WITH THE DELAYED SURVEY""" + id_dict[appcode][col] = val + + id_df = pd.DataFrame.from_dict(id_dict, orient= 'index') + serialize.save_pickle(id_df,id_file,test_override=True) + break + + """anonymize pii columns in df """ + @staticmethod + def anonymize_cols(df): + all_pii_cols = sum(list(study_config.id_cols.values()),[]) + for col in df.columns: + if col in all_pii_cols: + df[col] = df["AppCode"] + return df + + """ Adds PII back to data frame by replacing the values of all anonymized columns with the pii""" + @staticmethod + def add_pii(df, id_file = id_file,only_main_cols = False): + id_df = serialize.soft_df_open(id_file) + all_pii_cols = list(id_df.columns) + relevant_pii = [x for x in df.columns if x in all_pii_cols] + + # Drop Cols with the anonymized pii + df = df.drop(columns = relevant_pii) + + # Merge in the relevant pii + id_df = id_df.reset_index().rename(columns = {"index":"AppCode"}) + + if only_main_cols==True: + id_main_cols = [x for x in study_config.main_cols if x in id_df.columns] + id_df = id_df[id_main_cols] + + df_pii = id_df.merge(df, on = "AppCode", how = 'right') + return df_pii + + diff --git a/17/replication_package/code/lib/data_helpers/data_utils.py b/17/replication_package/code/lib/data_helpers/data_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..dc429f33c9a1c144898b01b7921e146d9037fe7a --- /dev/null +++ b/17/replication_package/code/lib/data_helpers/data_utils.py @@ -0,0 +1,194 @@ +import pandas as pd +import os +import sys +import re +import pickle +import yaml +from datetime import datetime, timedelta, timezone +import dateutil.parser +import pytz + + +from lib.experiment_specs import study_config +from lib.utilities import codebook + +""" Purpose: cleans the iso datetimes in a dataframe column + -Input: + - DataFrame: data - raw input data that contains the time column + - col_name - the name of the column + - keep_nan : keep rows with empty value for df[col_name] + - orig_tz: when you remove the timezone adjustment, what is the timezone. if "local", then removing the timezone + yields the local time for the participant. 
+ + - Output: + -dataframe with the following new columns: + - {col_name}Datetime - in the phone's local time + - {col_name}DatetimeHour + - {col_name}Date + + - {col_name}EasternDatetime - in eastern time + - {col_name}EasternDatetimeHour + """ + +def clean_iso_dates(data_raw: pd.DataFrame, col_name: str, keep_nan: bool = False, orig_tz: str = "Local"): + + data = data_raw.loc[data_raw[col_name].notnull()] + data[col_name + 'DatetimeTZ'] = data[col_name].apply(lambda x: dateutil.parser.parse(x).replace(microsecond=0)) + + # if the datetime without the timezone adjustment brings the time to local + if orig_tz == "Local": + data[col_name + 'Datetime'] = data[col_name + 'DatetimeTZ'].apply(lambda x: x.replace(tzinfo=None)) + data[col_name + 'DatetimeUTC'] = data[col_name + 'DatetimeTZ'].apply( + lambda x: x.replace(tzinfo=timezone.utc) - x.utcoffset()) + + # if the datetime without the timezone adjustment brings the time UTC + else: + data[col_name + 'Datetime'] = data[col_name + 'DatetimeTZ'].apply( + lambda x: x.replace(tzinfo=timezone.utc) + x.utcoffset()) + data[col_name + 'Datetime'] = data[col_name + 'Datetime'].apply( lambda x: x.replace(tzinfo=None)) + data[col_name + 'DatetimeUTC'] = data[col_name + 'DatetimeTZ'].apply(lambda x: x.replace(tzinfo=timezone.utc)) + + data[col_name + 'DatetimeHour'] = data[col_name + 'Datetime'].apply(lambda x: x.replace(minute=0, second=0)) + data[col_name + 'Date'] = data[col_name + 'DatetimeHour'].apply(lambda x: x.date()) + + # Create Col in Eastern Time + eastern = pytz.timezone('US/Eastern') + data[col_name + 'EasternDatetime'] = data[col_name + 'DatetimeUTC'].apply( + lambda x: x.astimezone(eastern).replace(tzinfo=None)) + data[col_name + 'EasternDatetimeHour'] = data[col_name + 'EasternDatetime'].apply( + lambda x: x.replace(minute=0, second=0)) + data = data.drop(columns=[col_name, col_name + 'DatetimeTZ', col_name + 'DatetimeUTC']) + + if keep_nan: + missing = data_raw.loc[data_raw[col_name].isnull()] + data = data.append(missing) + return data + + +"""remove data files from directory""" +def remove_files(directory): + for file in os.listdir(directory): + file_path = os.path.join(directory, file) + try: + if os.path.isfile(file_path): + os.unlink(file_path) + except Exception as e: + print(e) + + +""" This method inputs missing start and enddatetime for survey incompletes. 
This helps determine what to count as use in phase, +for people that have not completed their surveys""" +def inpute_missing_survey_datetimes(df, phase): + specs = study_config.phases[phase] + old_code = specs["StartSurvey"]["Code"] + new_code = specs["EndSurvey"]["Code"] + + # the missing end date will be today if the survey hasn't ended yet + missing_end_date = min(datetime.now().replace(microsecond=0), study_config.phases[phase]["EndSurvey"]["End"]) + + # if the end survey hasn't even been distributed yet, add end survey completion col artificially to df + if datetime.now() < study_config.phases[phase]["EndSurvey"]["Start"]: + df.loc[(df[f"{old_code}_Complete"] == "Complete") , f"{new_code}_SurveyStartDatetime"] = missing_end_date + + else: + # we inpute the completion of the end survey, if they completed the start survey: + df.loc[(df[f"{old_code}_Complete"] == "Complete") & + (df[f"{new_code}_SurveyStartDatetime"].isnull()), f"{new_code}_SurveyStartDatetime"] = missing_end_date + + return df + +""" Adds survey code prefix to each column in the df""" +def add_survey_code(df, code): + for col in df.columns.values: + no_prefix_cols = study_config.main_cols + study_config.embedded_main_cols + if col not in no_prefix_cols: + new_name = code + "_" + col + df = df.rename(columns={col: new_name}) + return df + + +"""A function which takes the clean_master master and outputs all the variables from a phase, without the prefixes""" +def keep_relevant_variables(df_raw, phase): + start_code = study_config.phases[phase]["StartSurvey"]["Code"] + end_code = study_config.phases[phase]["EndSurvey"]["Code"] + + """Keep participants that completed the relevant survey in the phase""" + df = df_raw.loc[df_raw[f"{start_code}_Complete"] == "Complete"].copy() + + #"""LIMIT INDEX CONSTRUCTION TO FOLKS WITH CONSISTENT USE""" + #codebook_dic = pd.read_csv(codebook.main_codebook_path, index_col="VariableName").to_dict(orient='index') + + ## get all columns in the given phase that are also in the df + #keep_cols = [codebook.add_prefix_var(x, phase, codebook_dic) for x in codebook_dic.keys()] + #keep_cols_in_df = [x for x in keep_cols if x in df.columns] + + keep_cols = [x for x in df.columns if f"{start_code}_" in x or x in study_config.main_cols+study_config.embedded_main_cols] + df = df[keep_cols] + + # drop prefixes on these columns + df.columns = [x.replace(f"{start_code}_","") for x in df.columns] + return df + +def add_A_to_appcode(df,appcode_col): + df[appcode_col] = df[appcode_col].astype(str).fillna("nan") + + #convert weird float appcodes to integers + df[appcode_col] = df[appcode_col].apply(lambda x: int(float(x)) if (x != "nan") and (x[0] != "A") else x) + + # add to those who need it + df[appcode_col] = df[appcode_col].astype(str).apply(lambda x: "A" + x if len(x) == 8 else x) + + # assert we only have nans and proper appcodes + df["Check"] = df[appcode_col].apply(lambda x: True if (len(x) == 9) or (len(x) == 3) else False) + l = df["Check"].value_counts() + l_s = df.loc[df["Check"]==False] + assert df["Check"].all() == True + return df + +"returns the latest main survey that has already ended" +def get_last_survey(): + last_complete_time = datetime(2018, 1, 1, 0, 0) + last_survey = "" + surveys = study_config.main_surveys + for survey in surveys: + chars = study_config.surveys[survey] + if chars["End"] < datetime.now(): + if chars["End"] > last_complete_time: + last_survey = survey + last_complete_time = chars["End"] + return last_survey + + +# asserts two dfs that share common appcodes and columns, 
within a col_list +def assert_common_appcode_values(df1, df2, col_list): + common_appcodes = set(df1["AppCode"]).intersection(set(df2["AppCode"])) + common_columns = list(set(df1.columns).intersection(set(df2.columns)).intersection(col_list)) + compare_list = [] + for df in [df1, df2]: + df = df.loc[df["AppCode"].isin(common_appcodes)] + df = df[common_columns] + df = df.sort_values(by="AppCode").reset_index(drop=True).astype(str) + compare_list.append(df) + assert len(compare_list[0]) == len(compare_list[1]) + + c = compare_list[0].merge(compare_list[1], how='outer', on='AppCode', ) + for col in compare_list[0].columns: + if col == "AppCode": + continue + try: + c[col + "_x"].equals(c[col + "_y"]) == True + except: + print(f"no match on{col}") + print(c[col + "_x"].dtype) + print(c[col + "_y"].dtype) + print("First five rows that don't match:") + print(c.loc[c[col + "_x"] != c[col + "_y"]].head()) + sys.exit() + +def merge_back_master(df_master, df_phase, phase): + """ add prefixes to a phase specific df, and merge it to master""" + codebook_dic = pd.read_csv(codebook.codebook_path, index_col="VariableName").to_dict(orient='index') + df_phase.columns = [codebook.add_prefix_var(x, phase, codebook_dic) for x in df_phase.columns] + new_cols = ["AppCode"] + list(set(df_phase.columns) - set(df_master.columns)) + df_master = df_master.merge(df_phase[new_cols], how='outer', left_on="AppCode", right_on="AppCode") + return df_master \ No newline at end of file diff --git a/17/replication_package/code/lib/data_helpers/gaming.py b/17/replication_package/code/lib/data_helpers/gaming.py new file mode 100644 index 0000000000000000000000000000000000000000..ada3b54376a3e66a57b2f01e3331e853cacb780f --- /dev/null +++ b/17/replication_package/code/lib/data_helpers/gaming.py @@ -0,0 +1,301 @@ +import os +import pandas as pd +from datetime import timedelta,datetime + +from lib.utilities import codebook +from lib.experiment_specs import study_config + + +from lib.data_helpers.builder_utils import BuilderUtils +from lib.data_helpers import test + +from lib.utilities import serialize + +class Gaming(): + gaming_dir = os.path.join("data","external","intermediate","PhoneDashboard","Gaming") + events_file = os.path.join(gaming_dir,"Events") + first_last_file = os.path.join(gaming_dir, "FirstLast") + diagnosed_file = os.path.join(gaming_dir, "Diagnosed") + diagnosed_test_file = diagnosed_file.replace("intermediate","intermediate_test") + + good_diag = ["Phone never shut off", + "Phone shut off", + "Phone shut off, even if d1df["AppRuntime"]) & (df["PrevAppCode"]==df["AppCode"]),keep_cols] + + else: + # add suspicious obs from the the first_last df when the last reading from the previous zipfile + # has a app runtime that is greater than the app runtime for the current zipfile + ev = df.loc[(df["PrevAppRuntime"]>df["AppRuntime"]) + & (df["PrevAppCode"]==df["AppCode"]) + & (df["Sequence"]=="First") + & (df["PrevSequence"]=="Last") + & (df["PrevZipfile"]!=df["Zipfile"]) + , keep_cols] + + serialize.save_pickle(ev, os.path.join(Gaming.gaming_dir,"Granular",f"events{file}")) + + """gets the first and last observation from each raw zipfile""" + @staticmethod + def get_first_last(df,file): + first = df.groupby("AppCode").first() + first["Sequence"] = "First" + last = df.groupby("AppCode").last() + last["Sequence"] = "Last" + first_last_df = first.append(last).reset_index() + first_last_df = first_last_df[Gaming.game_cols] + serialize.save_pickle(first_last_df, os.path.join(Gaming.gaming_dir, "FirstLast", 
f"first_last_{file}")) + + """ assembles the events file, diagnoses gaming events, and summarizes blackouts on the user level by phase""" + @staticmethod + def process_gaming(error_margin, hour_use,raw_user_df): + + #don't run raw gaming detection pipeline during test...debug over notebooks if needed + config_user_dict = serialize.open_yaml("config_user.yaml") + if config_user_dict['local']['test']: + diag_df = serialize.open_pickle(Gaming.diagnosed_file) + + else: + Gaming._add_first_last_events() + ev_df = Gaming._aggregate_events() + diag_df = Gaming._diagnose_events(ev_df, error_margin, hour_use) + + #rehape all blackout events for main analysis + game_user_df = Gaming._reshape_events(diag_df, raw_user_df) + + #reshape screen active blackout events for side analysis + game_user_df_SA = Gaming._reshape_events(diag_df.loc[diag_df["PrevScreenActive"]==1], raw_user_df,"ActiveBlackoutsOverPhase") + + game_hour_df = Gaming._expand_gaming_df(diag_df,"GameHourDf") + + #game_hour_df_under_24 = Gaming._expand_gaming_df(diag_df.loc[diag_df["BlackoutHours"]<24], + # "GameHourDfUnder24") + + return game_user_df + + + """ aggregate the first last observations, and then scan them. + We are scanning if the last reading from the previous zipfile + has a app runtime that is greater than the app runtime for the next zipfile """ + @staticmethod + def _add_first_last_events(): + fl_dir = os.path.join(Gaming.gaming_dir, "FirstLast") + df = pd.concat([serialize.soft_df_open(os.path.join(fl_dir, x)) for x in os.listdir(fl_dir)]) + df = df.sort_values(by=["AppCode", "CreatedEasternDatetime"]).reset_index(drop=True) + if datetime.now()>study_config.surveys["Baseline"]["Start"]: + df = df.loc[df['CreatedDatetime']>study_config.surveys["Baseline"]["Start"]] + df["PrevSequence"] = df["Sequence"].shift(1) + Gaming.scan(df, "fl", first_last_bool=True) + + """aggregates all the individual events in the granular directory""" + @staticmethod + def _aggregate_events(): + ev_dir = os.path.join(Gaming.gaming_dir, "Granular") + ev_df = pd.concat([serialize.soft_df_open(os.path.join(ev_dir, x)) for x in os.listdir(ev_dir)]) + ev_df = ev_df.drop_duplicates(subset=["AppCode", "CreatedEasternDatetime"], keep='last').reset_index(drop=True) + serialize.save_pickle(ev_df, Gaming.events_file) + return ev_df + + + """ estimates the runtime of the phone when the user was not tracking + - d0: the device runtime right before pennyworth stopped recording + - d1: the device runtime when PD returned to recording + - dd: difference in phone runtime (d1 - d0) + - td: difference in the timestamps associated with d0 and d1 + - error_margin: number of hours that CreateDateTime or runtime stamps can deviate before error is flagged + - all variables have hour units + """ + + @staticmethod + def _diagnose_events(ev_df, error_margin, clean_hour_use): + df = ev_df.sort_values(by = ['AppCode','CreatedEasternDatetime']) + df = df.loc[df["PrevCreatedEasternDatetime"]>study_config.first_pull] + + if datetime.now()>study_config.surveys["Baseline"]["Start"]: + df = df.loc[df['PrevCreatedDatetime']>study_config.surveys["Baseline"]["Start"]] + + df["CreatedEasternDatetimeDiffHours"] = (df["CreatedEasternDatetime"] - df["PrevCreatedEasternDatetime"]).apply( + lambda x: round(x.days * 24 + x.seconds / (60 * 60), 2)) + + for col in ["DeviceRuntime", "AppRuntime", "PrevDeviceRuntime", "PrevAppRuntime"]: + df[f"{col}Hours"] = (df[f"{col}"] / (1000 * 60 * 60)).round(decimals=2) + + for col in ["DeviceRuntimeHours","AppRuntimeHours"]: + df[col+"Diff"] = 
df[col]-df[f"Prev{col}"] + + ne_dict = df.to_dict(orient='index') + day = clean_hour_use.groupby(["AppCode","CreatedDate"])["UseMinutes"].sum() + day_dic = {k: day[k].to_dict() for k, v in day.groupby(level=0)} + + for key, val in ne_dict.items(): + + d0 = val["PrevDeviceRuntimeHours"] + d1 = val["DeviceRuntimeHours"] + td = val["CreatedEasternDatetimeDiffHours"] + date0 = val["PrevCreatedDatetime"] + date1 = val["CreatedDatetime"] + + if val["AppCode"] in day_dic: + app_dic = day_dic[val["AppCode"]] + else: + #this appcode has no use :/ + app_dic = {} + + # Remove false-positives due to data export lag. + # ..i.e. drop an event if there is use in between the pings + if (date0+timedelta(days=1)).date()0 + elif d1 - d0 < 0: + # indicates that phone shutdown: + ne_dict[key]['Diagnosis'] = "Phone shut off" + ne_dict[key]['BlackoutHoursLB'] = d1 + ne_dict[key]['BlackoutHoursUB'] = td + + if td + error_margin < d1: + # Impossible, comment error + ne_dict[key]['Diagnosis'] = "ERROR: td = d1: + # if new runtime is less than or equal to time difference: phone had to have shut off, + # even if new runtime is greater than old runtime + ne_dict[key]['Diagnosis'] = f"Phone shut off, even if d1=d0 & d1>td + ne_dict[key]['Diagnosis'] = f"Phone never shut off" + ne_dict[key]['BlackoutHoursLB'] = td + ne_dict[key]['BlackoutHoursUB'] = td + + if td + error_margin < d1 - d0: + # Impossible, comment error + ne_dict[key]['Diagnosis'] = "ERROR: if phone never shutoff, no way for td < d1-d0" + + + + df = pd.DataFrame.from_dict(ne_dict, orient='index') + df["BlackoutHours"] = (df["BlackoutHoursLB"] + df["BlackoutHoursUB"])/2 + df = Gaming._diagnose_dups(df) + serialize.save_pickle(df, Gaming.diagnosed_file) + test.save_test_df(df,Gaming.diagnosed_test_file) + + return df + + @staticmethod + def _diagnose_dups(df): + df = df.sort_values(by=["AppCode", "PrevCreatedEasternDatetime"]).reset_index(drop=True) + d_dict = df.to_dict(orient='index') + for key, val in d_dict.items(): + if key + 1 not in d_dict: + continue + + if d_dict[key]["AppCode"] != d_dict[key + 1]["AppCode"]: + continue + + if d_dict[key]["CreatedEasternDatetime"] > d_dict[key + 1]["PrevCreatedEasternDatetime"]: + d_dict[key]["Diagnosis"] = "Error: Another event starts before this event ends" + + #put an error on the other event if it is NOT embedded in the original event + if d_dict[key]["CreatedEasternDatetime"] < d_dict[key + 1]["CreatedEasternDatetime"]: + d_dict[key + 1]["Diagnosis"] = "Error: Another event ends after this event starts" + + df = pd.DataFrame.from_dict(d_dict, orient='index') + return df + + """ + Input: takes the diagnosed event level dataframe + Output: User level df that contains the total blackout period time by phase + """ + @staticmethod + def _reshape_events(diag_df,raw_user,file_name = None): + df = diag_df.loc[diag_df["Diagnosis"].isin(Gaming.good_diag)] + df["CreatedDate"] = df["CreatedDatetime"].apply(lambda x: x.date()) + df = BuilderUtils.add_phase_label(df, + raw_df_date = "CreatedDate", + start_buffer = 0, + end_buffer = -1,) + + # Replace Values of phase with the start survey code + codes = [study_config.phases[x]["StartSurvey"]["Code"] for x in list(study_config.phases.keys())] + rename_dic = dict(zip(list(study_config.phases.keys()), codes)) + df["Phase"] = df["Phase"].apply(lambda x: rename_dic[x] if x in rename_dic else x) + + + df_s = df.groupby(["AppCode","Phase"])["BlackoutHours"].sum().reset_index() + + df_p = df_s.pivot_table(index=["AppCode"], + values=["BlackoutHours"], + columns=["Phase"], + 
aggfunc='first') + + #Flatten Column Names (and rearange in correct order + df_p.columns = ['_'.join(col[::-1]).strip() for col in df_p.columns.values] + df_p = df_p.reset_index() + + #if file_name != None: + # serialize.save_pickle(df_p,os.path.join("data","external","intermediate","PhoneDashboard","Gaming",file_name)) + + # We don't calculate blackouthours per day here because we use DaySet as the denomenator + return df_p + + @staticmethod + def _expand_gaming_df(diag,file_name): + ex = diag.loc[diag["Diagnosis"].isin(Gaming.good_diag)] + # Creates list of DatetimeHours that are in BlackoutPeriod + ex["DatetimeHour"] = ex.apply(lambda x: Gaming.get_time_attributes(x, "Hour"), axis=1) + + # Expand the dataframe + ex = ex.explode("DatetimeHour") + ex["DatetimeHour"] = ex["DatetimeHour"].apply(lambda x: x.replace(minute=0, second=0, microsecond=0)) + ex["HourCount"] = ex.groupby(["AppCode", "CreatedDatetime"])["DatetimeHour"].transform('count') + # Evenly divide the blackout period among the hours it occupied + ex["BlackoutHours"] = ex["BlackoutHours"] / ex["HourCount"] + + # Compress onto the App-Hour Level (this compresses multiple blackout events that occured on the same datetime hour) + ex = ex.groupby(["AppCode", "DatetimeHour"])["BlackoutHours"].sum().reset_index() + + config_user_dict = serialize.open_yaml("config_user.yaml") + + if config_user_dict['local']['test']==False: + serialize.save_pickle(ex, os.path.join(Gaming.gaming_dir,file_name)) + return ex + + @staticmethod + def get_time_attributes(df, kind): + start = df["PrevCreatedDatetime"] + end = df["CreatedDatetime"] + if kind == "Hour": + thing = [x for x in pd.date_range(start, end, freq="H")] + else: + thing = [x.weekday() for x in pd.date_range(start, end, freq="D")] + return thing diff --git a/17/replication_package/code/lib/data_helpers/manual_changes.py b/17/replication_package/code/lib/data_helpers/manual_changes.py new file mode 100644 index 0000000000000000000000000000000000000000..d2669130e44a73e48b0d162f5235134b4a016784 --- /dev/null +++ b/17/replication_package/code/lib/data_helpers/manual_changes.py @@ -0,0 +1,66 @@ +import os +import sys +import pandas as pd + +from lib.experiment_specs import study_config +from lib.utilities import codebook +from lib.data_helpers.confidential import Confidential + + +class ManualChanges(): + + @staticmethod + def manual_clean(df,survey,manual_changes_path): + survey_code = study_config.surveys[survey]["Code"] + + appcode_changes = pd.read_excel(manual_changes_path, sheet_name="AppCodes").to_dict(orient='index') + for index, value in appcode_changes.items(): + if value["Old"][0] != "A": + print("AppCodes must begin with 'A'") + sys.exit() + + if value["Old"] not in list(df["AppCode"].unique()): + print(f"Couldn't find old appcode {value['Old']}") + else: + df.loc[df["AppCode"] == value["Old"], "AppCode"] = value["New"] + print(f"Replaced AppCode {value['Old']} with {value['New']}") + + manual_changes = pd.read_excel(manual_changes_path, sheet_name="General") + man_c = manual_changes.to_dict(orient='index') + survey_codes = [study_config.surveys[x]["Code"] + "_" for x in study_config.surveys] + + for key, value in man_c.items(): + variable = value["Variable"] + + # Remove the survey specific code, change in survey data if the survey codes match (i.e. 
B_ is with Baseline) + if variable.startswith(tuple(survey_codes)): + prefix_code = variable.split("_", 1)[0] + variable = variable.split("_", 1)[1] + + #if the prefix matches the survey's code we are cleaning + if prefix_code == survey_code: + df = ManualChanges.manual_replace(df,value,variable) + + # No Need to Modify Variable if the variable is a main column or a treatment column + elif (variable in study_config.main_cols+study_config.embedded_main_cols) and (variable in df.columns): + df = ManualChanges.manual_replace(df,value,variable) + else: + continue + + return df + + @staticmethod + def manual_replace(df, value, variable): + if variable not in df.columns: + return df + else: + try: + if value["AppCode"] not in list(df["AppCode"].unique()): + print(f"Manual change did not occur for {value['AppCode']}'s {variable} to {value['NewValue']} b/c appcode was not found") + + else: + df.loc[df["AppCode"] == value["AppCode"], variable] = value["NewValue"] + print(f"Changed {variable} for {value['AppCode']} to {value['NewValue']}") + except: + print(f"{variable} does not seem to be in dataframe") + return df \ No newline at end of file diff --git a/17/replication_package/code/lib/data_helpers/pull_events.py b/17/replication_package/code/lib/data_helpers/pull_events.py new file mode 100644 index 0000000000000000000000000000000000000000..1acdf307f81b21edb74df7fd9ed95686971517f2 --- /dev/null +++ b/17/replication_package/code/lib/data_helpers/pull_events.py @@ -0,0 +1,395 @@ +import os +import sys +import shutil +import zipfile +import multiprocessing +import pandas as pd +from datetime import datetime,timedelta + +from lib.data_helpers import data_utils +from lib.experiment_specs import study_config +from lib.data_helpers import test +from lib.data_helpers.builder_utils import BuilderUtils +from lib.data_helpers.gaming import Gaming + +from lib.utilities import serialize + +class PullEvents(): + + port_dir = {"PCDashboard": os.path.join("data","external","dropbox","..","PCPort"), + "PhoneDashboard":os.path.join("data","external","dropbox","..","PhoneDashboardPort")} + + #identifying_cols are a list of columns in the dataframe that identify a row. none of them can be empty + def __init__(self,source: str, keyword: str, scratch: bool, test: bool, + time_cols: list, raw_timezone: str, appcode_col: str, identifying_cols: list, sort_cols: list, + drop_cols: list, cat_cols: list, + compress_type: str, processing_func = BuilderUtils.default_puller_process, file_reader = None): + """ + + Purpose + ------- + + This class appends the raw data files housed in self.zipped_directory, cleans them a bit with the self.processing func, + and saves the merged data file in self.raw_file. Unless self.scratch == True, the puller will only process new raw data files + and appends them to the dataframe saved in self.raw_file. The puller documents which files have been processed in the config located in self.config_file_path + + WARNING: if self.scratch==TRUE, it may take hours or days to reprocess all the data. Call Michael to discuss if this needs to happen. + + Parameters + ---------- + source - source the app data source (either PC or PD) + keyword - the kind of data coming in (e.g. 
Snooze, Use etc) + scratch - whether or not to start data processing from scratch + test - whether or not to test the pulling pipeline (it will reprocess that latest three zipfiles) (I wouldn't play with this) + time_cols - list of columns that contain datetimes + raw_timezone - the timezone of the raw data + appcode_col - the raw column name containing the appcodes + identifying_cols - list of columns that should identify a unique row + sort_cols - list of columns that describes how the df should be ordered. The df will be sorted from smallest to largest value of the sort_cols + drop_cols - list of columns that appear in the raw zipfile or after zipfile processing that should be dropped + cat_cols - list of columns that can be converted into the categorical data type + compress_type - how the zipfiles are compressed (either "folder", "csv", or "txt") + processing_func - the function used to process the raw data (e.g. clean_master dates) + file_reader - function that reads in the raw file into memory. only required for file stored in zip folders. + """ + + self.source = source + self.keyword = keyword + self.scratch = scratch + self.pull_test = test + self.time_cols = time_cols + self.raw_timezone = raw_timezone + self.appcode_col = appcode_col + self.identifying_cols = identifying_cols + self.sort_cols = sort_cols + self.drop_cols = drop_cols + self.cat_cols = cat_cols + self.compress_type = compress_type + self.processing_func = processing_func + self.file_reader = file_reader + + + self.zipped_directory = os.path.join("data","external", "input", source, f"Raw{self.keyword}Zipped") + self.buggy_dir = os.path.join("data","external", "input", source, "BuggyFiles") + self.config_file_path = os.path.join("data","external", "input", source, f"{self.source}{self.keyword}Config.csv") + self.raw_file = os.path.join("data","external", "input", source, f"Raw{self.keyword}") + self.test_file = os.path.join("data","external", "input_test", source, f"Raw{self.keyword}") + + self.config_user_dict = serialize.open_yaml("config_user.yaml") + + self.builder_utils = BuilderUtils() + self.added_config = self.builder_utils.get_config(self.config_file_path) #dictionary of zipfiles in the raw_file + self.new_config = {} + + + def update_data(self): + print(f"\n Updating {self.keyword} data! {datetime.now()}") + if self.config_user_dict["local"]["test"]==True: + df = serialize.soft_df_open(self.test_file) + df = self._memory_redux(df) + print(df.memory_usage(deep=True) / 1024 ** 2) + return df + + else: + self.builder_utils.transport_new_zip_files(PullEvents.port_dir[self.source], self.keyword, self.zipped_directory) + new_zip_files = [x for x in os.listdir(self.zipped_directory) if (x.replace(".zip", "") not in self.added_config) & (".zip" in x)] + print(f"new zip files {len(new_zip_files)}") + + + # if no new data or scratch run, just load old data + if ((self.scratch == False) & + (len(new_zip_files) == 0) & + (self.pull_test == False)): + print("\t No New Data! Loading pickled raw data into memory") + + if(os.path.getsize(self.raw_file+".pickle") > 0) : + df = serialize.soft_df_open(self.raw_file) + else: + print("Raw file is empty. 
Reading Backup Compression Zip") + df = serialize.read_gz(self.raw_file+".gz") + df = self._memory_redux(df) + + # Archiving Recruitment Data IF Use Data + if (self.keyword == "Alternative") or (self.keyword == "Use"): + #self._archive_recruitment(df) + df = df.loc[df["CreatedDatetimeHour"] >= study_config.surveys["Baseline"]["Start"]-timedelta(1)] + + return df + + else: + print(f"Length of new zip files {len(new_zip_files)}") + df = self.aggregate_data(new_zip_files) + self.builder_utils.update_config(self.added_config, + self.new_config, + self.config_file_path) + return df + + def aggregate_data(self,new_zip_files): + old_df, new_zip_files = self._configure_data(new_zip_files) + + print(f"\t Going to add {len(new_zip_files)} files:") + if self.config_user_dict['local']["parallel"] == False: + df_dic = self._process_zips(new_zip_files) + + #if the number of files > # of cores, then multiprocess + elif multiprocessing.cpu_count() < len(new_zip_files): + df_dic = self._process_zips_mp(new_zip_files) + + else: + print(f"not multiprocessing b/c number of cores {multiprocessing.cpu_count()} < {len(new_zip_files)}") + df_dic = self._process_zips(new_zip_files) + + #get appcodes to keep + print("\t get data survey") + cl_file = os.path.join(os.path.join("data","external","dropbox_confidential","ContactLists","Used"), + study_config.kept_appcode_cl) + a_df = pd.read_csv(cl_file) + a_df = data_utils.add_A_to_appcode(a_df,"AppCode") + appcodes = list(a_df["AppCode"]) + print(len(appcodes)) + assert len(appcodes) == study_config.number_of_kept_appcodes + + # combine new with old and keep relevant appcodes + if len(df_dic)>0: + print("\t adding new data to master") + new_df = pd.concat(list(df_dic.values()), sort=True) + new_df = new_df.loc[new_df["AppCode"].isin(appcodes)] + parent_df = pd.concat([old_df,new_df], sort=True) + else: + print("\t no new data!") + parent_df = old_df + + # minimal cleaning and save to disk: sort from smallest to largest values of sort_cols + print(f"\t Len before duplicate drop {len(parent_df)}") + parent_df = parent_df.sort_values(by = self.sort_cols) + parent_df = parent_df.drop_duplicates(subset = self.identifying_cols, keep = 'last') + print(f"\t Len after duplicate drop {len(parent_df)}") + + for col in parent_df.columns: + if "Unnamed" in col: + parent_df = parent_df.drop(columns= [col]) + parent_df = parent_df.reset_index(drop=True) + print(parent_df.memory_usage(deep=True) / 1024 ** 2) + + try: + os.rename(self.raw_file+".pickle",self.raw_file+"PrevRun.pickle") + except: + print('could not find old pickle file') + + try: + serialize.save_pickle(parent_df, self.raw_file) + except: + print("Failed to save pickle!") + + try: + test.save_test_df(parent_df, self.test_file) + except: + print("Failed to save testfile") + + # DONT PUT IN TRY BECAUSE IF BACKUP FAILS, WE WANT TO RE PROCESS THE NEW FILES + try: + os.rename(self.raw_file + ".gz", self.raw_file + "PrevRun.gz") + except: + print("no old gz file") + parent_df.to_csv(self.raw_file + '.gz', index=False, compression='gzip') + + print(f"\t Done Saving {datetime.now()}") + + # add new guys to config in memory, now that everything has saved + for zip_file, df in df_dic.items(): + try: + latest_hour = str(df[self.time_cols[0]].max()) + earliest_hour = str(df[self.time_cols[0]].min()) + except: + latest_hour = "NAN" + earliest_hour = "NAN" + self.new_config[zip_file.replace(".zip", "")] = {"EarliestHour": earliest_hour, + "LatestHour": latest_hour, + "ZipFile": zip_file} + return parent_df + + """ modifes the 
reaggregation in the following way: + - if self.scatch is True: all zipfiles in data dire are new, and old_df is empty + - if self.test is True: reprocess the last three data files, and old_df is reloaded + - else: process the new zip files with old df""" + def _configure_data(self,new_zip_files): + if self.scratch: + all_zip_files = [x for x in os.listdir(self.zipped_directory) if ".zip" in x] + new_zip_files = sorted(all_zip_files) + + #continue_ans = input("Warning: Do you want to reprocess all use files? Enter (Y or N) \n ") + continue_ans = "Y" + if continue_ans == "N": + sys.exit() + print(f"DELETING ADDED EVENT CONFIG, CAUSING REAGGREGATION OF ALL {self.keyword} ZIPFILES") + self.added_config = {} + if self.keyword == "Use": + print("\t Also Deleting all Granular Gaming Files") + for gran_folder in [os.path.join(Gaming.gaming_dir, "FirstLast"), + os.path.join(Gaming.gaming_dir, "Granular")]: + shutil.rmtree(gran_folder) + os.mkdir(gran_folder) + old_df = pd.DataFrame() + + else: + if self.pull_test: + try: + assert len(new_zip_files) == 0 + except: + print("There are actual new zip files to process! First process these files, then test") + sys.exit() + + print("TEST: Reprocessing 3 files from zipped directory") + test_files = os.listdir(self.zipped_directory)[-3:] + for zip_file in test_files: + del self.added_config[zip_file.replace(".zip", "")] + new_zip_files = test_files + + if not os.path.exists((self.raw_file+".pickle")): + old_df = pd.DataFrame() + + elif os.path.getsize(self.raw_file + ".pickle") > 0: + old_df = serialize.soft_df_open(self.raw_file) + else: + print("Raw file is empty. Reading Backup Compression Zip") + old_df = serialize.read_gz(self.raw_file + ".gz") + + if len(old_df)>1: + old_df = self._memory_redux(old_df) + + # Archiving Recruitment Data IF Use Data + if (self.keyword == "Alternative") or (self.keyword == "Use"): + #self._archive_recruitment(old_df) + old_df = old_df.loc[old_df["CreatedDatetimeHour"] >= study_config.surveys["Baseline"]["Start"]-timedelta(1)] + + return old_df, new_zip_files + + def _process_zips_mp(self, new_zip_files: list): + + # create the pool object for multiprocessing + pool = multiprocessing.Pool(processes=study_config.cores) + + # split the files to add into n lists where n = cores + chunks = [new_zip_files[i::study_config.cores] for i in range(study_config.cores)] + + print(f"Multiprocessing with {study_config.cores} cpus") + df_list_of_dics = pool.map(func = self._process_zips, iterable = chunks) + pool.close() + print("Done Multiprocessing") + + # flatten the list of dictionaries + df_dic = {} + for d in df_list_of_dics: + for k, v in d.items(): + df_dic[k] = v + return df_dic + + def _process_zips(self,new_zip_files): + df_dic = {} + i = 1 + for zip_file in new_zip_files: + print(f"processing {zip_file} ({round((i * 100) / len(new_zip_files))}%)", end = "\r") + df = self._process_zip(zip_file) + if len(df) == 0: + continue + df_dic[zip_file] = df + i = i + 1 + return df_dic + + def _process_zip(self, zip_file): + if ".zip" not in zip_file: + return pd.DataFrame() + + # open zipfile + if self.compress_type != "folder": + df, problem = self._open_zipfile(zip_file) + + # open json zip folder + else: + df = self._open_zipfolder(zip_file) + problem = "" + + if len(df) == 0: + if problem != "Corrupt": + problem = "Empty" + print(f"{zip_file} is empty ...archiving in {self.buggy_dir}/{problem}") + + old_file = os.path.join(self.zipped_directory, zip_file) + new_file = os.path.join(self.buggy_dir, problem, zip_file) + 
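# Added descriptive comment (sketch of what the surrounding code does): the empty or corrupt zip is moved out of the Raw*Zipped directory into BuggyFiles/<Empty|Corrupt>/ so it is not picked up again on later runs, and the empty DataFrame is returned to the caller.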
os.rename(old_file, new_file) + return df + + df.columns = df.columns.str.replace(' ', '') + df = df.rename(columns={self.appcode_col: "AppCode"}) + df["AppCode"] = df["AppCode"].astype(str).apply(lambda x: "A" + x if x != "nan" and x.isnumeric() else x) + df = self.processing_func(df, zip_file, self) + + df["Zipfile"] = zip_file + df = self._memory_redux(df) + + return df + + def _open_zipfile(self,zip_file): + if self.compress_type == "txt": + separator = '\t' + elif self.compress_type == "csv": + separator = ',' + else: + separator = "" + print("Illegal seperator") + sys.exit() + try: + df = pd.read_csv(os.path.join(self.zipped_directory, zip_file), compression='zip', header=0, + sep=separator, quotechar='"') + problem = "None" + except: + print(f"{zip_file} is corrupt. Investigate in archive") + df = pd.DataFrame() + problem = "Corrupt" + return df, problem + + def _open_zipfolder(self,zip_folder): + #print(zip_folder) + #print(self.zipped_directory) + zip_ref = zipfile.ZipFile(os.path.join(self.zipped_directory, zip_folder)) + temp_folder = os.path.join(self.zipped_directory, zip_folder.replace(".zip", "FOLDER")) + zip_ref.extractall(path=temp_folder) + df_list = [] + for file in os.listdir(temp_folder): + file_path = os.path.join(temp_folder, file) + clean_data = self.file_reader(file_path) + df_list.append(clean_data) + shutil.rmtree(temp_folder) + df = pd.concat(df_list) + return df + + def _open_zipfolders_install(self,zip_folders): + """for recovering install data""" + for zip_folder in zip_folders: + df = self._open_zipfolder(zip_folder) + + def _memory_redux(self,df): + # drop columns that were initially in raw data or processing byproduct + for drop_col in self.drop_cols + ["Server","AsOf"]: + if drop_col in df.columns.values: + df = df.drop(columns = [drop_col]) + + # make columns categorical + for cat_col in self.cat_cols + ["Zipfile"]: + if cat_col in df.columns.values: + try: + df[cat_col] = df[cat_col].astype('category') + except: + print(f"\t categorizing {cat_col} failed!") + + return df + + def _archive_recruitment(self,df): + if os.path.exists(self.raw_file + "RecruitmentPhase.pickle"): + pass + else: + r_df_raw = df.loc[df["CreatedDatetimeHour"] < study_config.surveys["Baseline"]["Start"]-timedelta(1)] + try: + serialize.save_pickle(r_df_raw, self.raw_file + "RecruitmentPhase.pickle") + except: + r_df_raw.to_csv(self.raw_file + 'RecruitmentPhase.gz', index=False, compression='gzip') diff --git a/17/replication_package/code/lib/data_helpers/pull_survey.py b/17/replication_package/code/lib/data_helpers/pull_survey.py new file mode 100644 index 0000000000000000000000000000000000000000..49c8175064cc9d1a3bad0423a20089c3b5837ead --- /dev/null +++ b/17/replication_package/code/lib/data_helpers/pull_survey.py @@ -0,0 +1,73 @@ +import os +import pandas as pd + +""" +Input: +- str path = directory to store the raw survey +- str surveyname = name of survey. 
must be a survey in the study_config +""" + +class PullSurvey(): + # Qualtrics API Token + apiToken = "giId4AaEUZa4PIHNI49TR1g4tzEs3zrm6GRcyVcr" + # Qualtrics User ID + userId = "UR_2sPkRisvCCNPi73" + # Qualtrics Response Export File Format + fileFormat = "csv" + # Qualtrics Data Center (defaulting to 'ca1' is fine) + dataCenter = "ca1" + + def pull_qualtrics(self,path,survey_name): + from lib.experiment_specs import study_config + from lib.data_helpers import qualtricsapi2 + + # Get Responses in Progress and Completes + for sub_name, bool_str in {"Finished":"false","InProgress":"true"}.items(): + + qualtricsapi2.export_survey(apiToken=self.apiToken, + surveyId=study_config.surveys[survey_name]["QualtricsID"], + fileFormat=self.fileFormat, + dataCenter=self.dataCenter, + downloadDir = path, + exportResponsesInProgress=bool_str) + + + new_path = os.path.join(path,survey_name + sub_name + ".csv") + if os.path.isfile(new_path): + os.remove(new_path) + + # get the new raw file name by matching it to the clean file name,after removing spaces and hyphens from raw file + old_file = [] + for file in os.listdir(path): + matching_file = file.replace(" ","").replace("-","") + if (survey_name in matching_file) & \ + ("Finished" not in matching_file) &\ + ("InProgress" not in matching_file) &\ + (survey_name+".csv" != matching_file): + old_file.append(file) + try: + assert len(old_file) == 1 + except: + print(f" either 0 or more than one files with {survey_name} in file name in {path}") + sys.exit() + os.rename(os.path.join(path,old_file[0]),new_path) + + finished = pd.read_csv(os.path.join(path,survey_name+"Finished.csv")) + in_progress = pd.read_csv(os.path.join(path,survey_name+"InProgress.csv")) + in_progress = in_progress.iloc[2:,] + full = finished.append(in_progress) + full = full.replace({r'\r': ''}, regex=True) + full.to_csv(os.path.join(path,survey_name+".csv"), index = False) + print("\t Successful Download!") + +if __name__ == "__main__": + import sys + import git + # root directory of github repo + root = git.Repo('.', search_parent_directories = True).working_tree_dir + os.chdir(root) + + sys.path.append(root) + survey_name = input("Which Survey to Pull for Main Pipeline? 
") + path = os.path.join("data","external","dropbox_confidential","Surveys") + PullSurvey().pull_qualtrics(path,survey_name) \ No newline at end of file diff --git a/17/replication_package/code/lib/data_helpers/qualtricsapi2.py b/17/replication_package/code/lib/data_helpers/qualtricsapi2.py new file mode 100644 index 0000000000000000000000000000000000000000..840216d8caf6460e7c53e890e1625edbbc7c887b --- /dev/null +++ b/17/replication_package/code/lib/data_helpers/qualtricsapi2.py @@ -0,0 +1,51 @@ +import requests +import zipfile +import json +import io, os +import sys +import re + + +def export_survey(apiToken, surveyId, dataCenter, fileFormat,downloadDir, exportResponsesInProgress): + surveyId = surveyId + fileFormat = fileFormat + dataCenter = dataCenter + + # Setting static parameters + requestCheckProgress = 0.0 + progressStatus = "inProgress" + baseUrl = "https://{0}.qualtrics.com/API/v3/surveys/{1}/export-responses/".format(dataCenter, surveyId) + headers = { + "content-type": "application/json", + "x-api-token": apiToken, + } + + # Step 1: Creating Data Export + downloadRequestUrl = baseUrl + downloadRequestPayload = '{"format":"' + fileFormat + '",' '"useLabels": true, "exportResponsesInProgress":' + exportResponsesInProgress +'}' + downloadRequestResponse = requests.request("POST", downloadRequestUrl, data=downloadRequestPayload, headers=headers) + progressId = downloadRequestResponse.json()["result"]["progressId"] + print(downloadRequestResponse.text) + + # Step 2: Checking on Data Export Progress and waiting until export is ready + while progressStatus != "complete" and progressStatus != "failed": + print("progressStatus=", progressStatus) + requestCheckUrl = baseUrl + progressId + requestCheckResponse = requests.request("GET", requestCheckUrl, headers=headers) + requestCheckProgress = requestCheckResponse.json()["result"]["percentComplete"] + print("Download is " + str(requestCheckProgress) + " complete") + progressStatus = requestCheckResponse.json()["result"]["status"] + + # step 2.1: Check for error + if progressStatus is "failed": + raise Exception("export failed") + + fileId = requestCheckResponse.json()["result"]["fileId"] + + # Step 3: Downloading file + requestDownloadUrl = baseUrl + fileId + '/file' + requestDownload = requests.request("GET", requestDownloadUrl, headers=headers, stream=True) + + # Step 4: Unzipping the file + zipfile.ZipFile(io.BytesIO(requestDownload.content)).extractall(downloadDir) + print('Complete') \ No newline at end of file diff --git a/17/replication_package/code/lib/data_helpers/test.py b/17/replication_package/code/lib/data_helpers/test.py new file mode 100644 index 0000000000000000000000000000000000000000..8826c4628ce947880d23e791697258fce7f88685 --- /dev/null +++ b/17/replication_package/code/lib/data_helpers/test.py @@ -0,0 +1,46 @@ +import os +from lib.utilities import serialize +from lib.data_helpers import data_utils +from lib.experiment_specs import study_config + + +def select_test_appcodes(mc): + """ + purpose: selects a set of appcodes whose data will be used to test the pipeline quickly. Specifically, + it selects 50 active appcodes (i.e. have use data in past 3 days) and 25 appcodes inactive appcodes + + input: the clean master user df + """ + last_survey_complete = data_utils.get_last_survey() + code = study_config.surveys[last_survey_complete]["Code"] + print(f"\n Selecting test codes. 
last survey complete is {last_survey_complete}") + active_appcodes = list(mc.loc[(mc[f"{code}_Complete"]=="Complete")&(mc["ActiveStatus"]=="Normal"),"AppCode"]) + inactive_appcodes = list(mc.loc[(mc[f"R_Complete"]!="Complete"),"AppCode"]) + test_codes = {"AppCode":active_appcodes[50:100]+inactive_appcodes[25:50]} + serialize.save_pickle(test_codes, path = os.path.join("data","external","dropbox_confidential_test","TestCodes"),df_bool = False) + return test_codes + +def save_test_df(df, path): + """ + subsets the df to include testcodes + + Parameters + ---------- + df - any df that is about to get saved + path - the path to save the df (in the test data folders) + + """ + # only subset the df if the run is not a test b/c during a test run, the file has already been subsetted! + config_user_dict = serialize.open_yaml("config_user.yaml") + if config_user_dict["local"]["test"] == False: + test_codes = serialize.open_pickle(os.path.join("data", "external", "dropbox_confidential_test", "TestCodes"), df_bool=False) + test_appcodes = test_codes["AppCode"] + df = df.loc[df["AppCode"].isin(test_appcodes)] + + serialize.save_pickle(df,path) + + print(df.dtypes) + try: + serialize.save_hdf(df, path) + except: + print("couldn't save hdf!") \ No newline at end of file diff --git a/17/replication_package/code/lib/data_helpers/treatment.py b/17/replication_package/code/lib/data_helpers/treatment.py new file mode 100644 index 0000000000000000000000000000000000000000..20e3118ca688abf9a5d0a1f09f58d7b08398bda7 --- /dev/null +++ b/17/replication_package/code/lib/data_helpers/treatment.py @@ -0,0 +1,92 @@ + +import pandas as pd +from stochatreat import stochatreat + +import random +from functools import reduce +random.seed(13984759) + + +"""Assigns treatments in prep for the assignment Survey. If there is a used_cl, the assign_treatment function will use +ActualUse data from the used CL + - Treatment Assignment will use data from the old_phase + - New treatment assignment will take effect in the new_phase""" + +class Treatment(): + + def __init__(self,seed): + self.i = seed + + def prepare_strat(self, df,continuous_strat,discrete_strat): + #discretize continuous strat + for var in continuous_strat: + df[var] = df[var].astype(float) + median = df[var].median() + df.loc[df[var] >= median,var+"Strat"] = f"{var}High" + df.loc[df[var] < median, var + "Strat"] = f"{var}Low" + + #If var is missing a strat value, put in low + df.loc[df[var].isnull(), var + "Strat"] = f"{var}Low" + + #label discrete strat + for var in discrete_strat: + df[var+"Strat"] = df[var].astype(str).apply(lambda x: var+x) + + #compose strat var + strat_vars = [x+"Strat" for x in discrete_strat+continuous_strat] + df["Stratifier"] = df[strat_vars].values.tolist() + df["Stratifier"] = df["Stratifier"].apply(lambda x: 'X'.join(x)) + return df + + """ Used to assign randomly assign treatment to a subset of the data. The varname should already be in the data set. 
The function only modifies the values for the subset + + Input: + - df: the full dataframe + - subset_var: the categorical vairable used to subset + - subset_val: the value of subset_var we will keep + - inputs to _assign_treat_var + + Outpu: + - the full df with the varname randomly filled""" + + def subset_treat_var_wrapper(self,df, subset_var, subset_val, rand_dict: dict, stratum_cols: list, varname): + r_varname = "Randomized" + varname + df_dl = self.assign_treat_var(df=df.loc[df[subset_var] == subset_val], + rand_dict=rand_dict, + stratum_cols=stratum_cols, + varname=r_varname) + + df = df.merge(df_dl[["AppCode", r_varname]], how='left', on="AppCode") + df.loc[df[subset_var] == subset_val, varname] = df.loc[ + df[subset_var] == subset_val, r_varname] + df = df.drop(columns=[r_varname]) + return df + + def assign_treat_var(self, df: pd.DataFrame, rand_dict: dict, stratum_cols: list, varname: str): + treats = stochatreat(data=df, + stratum_cols=stratum_cols, + treats=len(rand_dict.keys()), + probs=list(rand_dict.values()), + idx_col='AppCode', + random_state= self.i, + misfit_strategy='stratum' + ) + + self.i = self.i * 2 + + raw_treat_values = range(len(rand_dict.keys())) + treat_labels = list(rand_dict.keys()) + translate_dict = dict(zip(raw_treat_values, treat_labels)) + treats[varname] = treats["treat"].apply(lambda x: translate_dict[x]) + + treat_tab = pd.crosstab(treats["stratum_id"], treats[varname], margins=True).reset_index() + + df = df.merge(treats[["AppCode", varname]], on="AppCode") + # Assert that the number of strata equal the number of rows when group the treatment df by strata + value_len = [len(df[x].unique()) for x in stratum_cols] + number_of_strata = reduce(lambda x, y: x * y, value_len) + + assert len(treats["stratum_id"].unique()) == number_of_strata + return df + + diff --git a/17/replication_package/code/lib/experiment_specs/FITSBY_apps.xlsx b/17/replication_package/code/lib/experiment_specs/FITSBY_apps.xlsx new file mode 100644 index 0000000000000000000000000000000000000000..702d47b868d256d261f86eae20fed54846848635 --- /dev/null +++ b/17/replication_package/code/lib/experiment_specs/FITSBY_apps.xlsx @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a1e31dd43403c07a8f0338095ae494b4b57d5bfabf7a692bde29c02c481c0f02 +size 12291 diff --git a/17/replication_package/code/lib/experiment_specs/ManualCodebookSpecs.xlsx b/17/replication_package/code/lib/experiment_specs/ManualCodebookSpecs.xlsx new file mode 100644 index 0000000000000000000000000000000000000000..684392d73110332b06a71093b57062853b9a1dda --- /dev/null +++ b/17/replication_package/code/lib/experiment_specs/ManualCodebookSpecs.xlsx @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:75f93b4c83ed9cfa7e9ecc5ba26f0144616404afeadffd59c1686c3ba1a23b89 +size 13303 diff --git a/17/replication_package/code/lib/experiment_specs/PhoneAddictionLibs.txt b/17/replication_package/code/lib/experiment_specs/PhoneAddictionLibs.txt new file mode 100644 index 0000000000000000000000000000000000000000..f49f842d9cdafbcab8b5e0eb4d8740d0823950ff --- /dev/null +++ b/17/replication_package/code/lib/experiment_specs/PhoneAddictionLibs.txt @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:9a394528cde679b4acc4e9cca7d73eeeffe2351b0ddbf0fb1604444943bd3125 +size 46 diff --git a/17/replication_package/code/lib/experiment_specs/README.md b/17/replication_package/code/lib/experiment_specs/README.md new file mode 100644 index 
0000000000000000000000000000000000000000..c53c1491e4cde0b0c402f981f16d01428088c45a --- /dev/null +++ b/17/replication_package/code/lib/experiment_specs/README.md @@ -0,0 +1,36 @@
+The experiment_specs folder provides detailed input specifications for the data pipeline. We keep these configurations in /lib, instead of /data, because they may be called by other modules. The folder includes:
+
+0. study_config defines the specific parameters for the data pipeline, including:
+    - detailed characteristics of each survey, and how the phases and surveys are connected
+    - certain assertions about the number of observations that should be in the final data set
+    - labels for a few special variables
+
+1. ManualCodebookSpecs.xlsx: specifications for manual variable naming and labelling. Note that all values are PREFIX AGNOSTIC, i.e. don't include "B_" in the variable name.
+    The file has the following columns:
+    - VariableName: The variable name that will appear in the final codebook (data/final/ExpandedCodebook.csv). This must be entered.
+
+    - VariableLabel: The variable label that will appear in the final codebook. If this is left empty, and the variable comes from a survey,
+    then the original question will be the variable label. There are 5 macros you can include in a manual label. Suppose we had the variable B_DailyPhoneUse; since the variable has the Baseline prefix B, the macros will equal:
+      - [PrevSurvey] = Recruitment
+      - [Survey] = Baseline
+      - [NextSurvey] = Midline
+      - [OldPhase] = Pre-Study
+      - [NewPhase] = Baseline Phase
+      - to see a full specification of these macros for each survey, check out lib/experiment_specs/label_code_dic.json
+
+    - DataType: the data type you want the variable values to be
+
+    - RawVariableName: the raw survey variable name you want to rename
+
+    - PrefixEncoding: The type of variable. It can equal Main (a variable that is specified once throughout the study, like AppCode), Survey (for survey variables), NewPhase (for variables that represent data in the next phase), or OldPhase (for variables that represent data in the previous phase).
+    These values are used when creating the final labels, if no macros are specified in the VariableLabel.
+    For example, if B_ActualUse is set to NewPhase, the study_config dictates that the BaselinePhase is the NewPhase, so the variable label will equal "Baseline Phase: Avg Daily Use".
+
+    - If you leave a field blank in ManualCodebookSpecs, then the default option for that field will be used.
+
+2. value_labels.yaml - specifies how to encode certain categorical variables
+
+3. varsets.py - defines the outcome variables associated with the indices used for midline stratification (applied in data/source/clean_master/management/midline_prep.py)
+
+4. 
FITSBY_apps.xlsx - matches the raw app names (from the PD data) with the FITSBY name + diff --git a/17/replication_package/code/lib/experiment_specs/__init__.py b/17/replication_package/code/lib/experiment_specs/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/17/replication_package/code/lib/experiment_specs/label_code_dic.json b/17/replication_package/code/lib/experiment_specs/label_code_dic.json new file mode 100644 index 0000000000000000000000000000000000000000..808b1eba2fbc608a008ef4ecdeef04e06b499282 --- /dev/null +++ b/17/replication_package/code/lib/experiment_specs/label_code_dic.json @@ -0,0 +1 @@ +{"Recruitment": {"[PreviousSurvey]": "Doesn't Exist", "[Survey]": "Recruitment", "[NextSurvey]": "Baseline", "[OldPhase]": "PreStudy", "[NewPhase]": "Phase0"}, "Baseline": {"[PreviousSurvey]": "Recruitment", "[Survey]": "Baseline", "[NextSurvey]": "Midline", "[OldPhase]": "Phase0", "[NewPhase]": "Phase1"}, "Midline": {"[PreviousSurvey]": "Baseline", "[Survey]": "Midline", "[NextSurvey]": "Endline1", "[OldPhase]": "Phase1", "[NewPhase]": "Phase2"}, "Endline1": {"[PreviousSurvey]": "Midline", "[Survey]": "Endline1", "[NextSurvey]": "Endline2", "[OldPhase]": "Phase2", "[NewPhase]": "Phase3"}, "Endline2": {"[PreviousSurvey]": "Endline1", "[Survey]": "Endline2", "[NextSurvey]": "Phase5Start", "[OldPhase]": "Phase3", "[NewPhase]": "Phase4"}, "Phase5Start": {"[PreviousSurvey]": "Endline2", "[Survey]": "Phase5Start", "[NextSurvey]": "Phase6Start", "[OldPhase]": "Phase4", "[NewPhase]": "Phase5"}, "Phase6Start": {"[PreviousSurvey]": "Phase5Start", "[Survey]": "Phase6Start", "[NextSurvey]": "Phase7Start", "[OldPhase]": "Phase5", "[NewPhase]": "Phase6"}, "Phase7Start": {"[PreviousSurvey]": "Phase6Start", "[Survey]": "Phase7Start", "[NextSurvey]": "Phase8Start", "[OldPhase]": "Phase6", "[NewPhase]": "Phase7"}, "Phase8Start": {"[PreviousSurvey]": "Phase7Start", "[Survey]": "Phase8Start", "[NextSurvey]": "Phase9Start", "[OldPhase]": "Phase7", "[NewPhase]": "Phase8"}, "Phase9Start": {"[PreviousSurvey]": "Phase8Start", "[Survey]": "Phase9Start", "[NextSurvey]": "Phase10Start", "[OldPhase]": "Phase8", "[NewPhase]": "Phase9"}} \ No newline at end of file diff --git a/17/replication_package/code/lib/experiment_specs/study_config.py b/17/replication_package/code/lib/experiment_specs/study_config.py new file mode 100644 index 0000000000000000000000000000000000000000..dc7076c8e971e201a1e4bea81946aaff91214a03 --- /dev/null +++ b/17/replication_package/code/lib/experiment_specs/study_config.py @@ -0,0 +1,547 @@ +from datetime import datetime, timedelta +from lib.utilities import serialize + +config_dic = serialize.open_yaml("config.yaml") +experiment_name = config_dic["experiment_name"] +config_user_dict = serialize.open_yaml("config_user.yaml") + +""" +Each survey should have the following values in the surveys dictionary: + - Name: the name of the survey, as it will appear in files (it should be identical to the key) + - Start: the datetime the survey is sent to participants + - End: the datetime the survey is closed to participants + - Code: the code that prefixes all survey specific variables that come from the survey + - OldPhase: The phase in the study before the survey was administered (OPTIONAL) + - NewPhase: The phase in the study after the survey was administered (OPTIONAL) + - FirstQuestion: in the raw survey data, all columns that appear to the left of the FirstQuestion will be dropped. 
+ Used to remove unwanted embedded data. Exceptions are made for columns in the study_config.main_cols list + or the study_config.kept_survey_data list + - Last Question: in the raw survey data, all columns that appear to the right of the LastQuestion will be dropped. + Used to remove unwanted embedded data. Exceptions are made for columns in the study_config.main_cols list + or the study_config.kept_survey_data list + - CompleteQuestion: the raw survey column that indicates the user completed the survey, if the user filled in + this question. Typically the last mandatory question in the survey. + - RawEmailCol: the column in the raw qualtrics survey that contains the email address of the participant + - QualtricsID: an ID qualtrics provide for pulling the survey from qualtrics + - QualtricsName: the raw name of the survey when it is first downloaded from qualtrics +""" +main_surveys = ["Recruitment","Baseline","Midline","Endline1","Endline2"] +text_surveys = ["TextSurvey"+str(x) for x in range(1,10)] +filler_surveys = ["Phase"+str(x)+"Start" for x in range(5,15)] + +surveys = { + + "Recruitment": { + "Name": "Recruitment", + "Start": config_dic["surveys"]["Recruitment"]["Start"], + "End": config_dic["surveys"]["Recruitment"]["End"], + "Code": "R", + "OldPhase": "PreStudy", + "NewPhase": "Phase0", + "FirstQuestion": "Country", + "LastQuestion": "GeneralFeedbackLong", + "CompleteQuestion": "PhoneOffTime12", + "RawEmailCol": "EmailConfirm", + "QualtricsID": "SV_cMGlDlNJGybHned", + "QualtricsName": "Phone Addiction Experiment/ Recruitment", }, + + "Baseline": { + "Name": "Baseline", + "Start": config_dic["surveys"]["Baseline"]["Start"], + "End": config_dic["surveys"]["Baseline"]["End"], + "Code": "B", + "OldPhase": "Phase0", + "NewPhase": "Phase1", + "FirstQuestion": "QualityCheck", + "LastQuestion": "GeneralFeedbackLong", + "CompleteQuestion":"Source", + "RawEmailCol":"Email", + "QualtricsID": "SV_3EHxo2vK2MVq4U5", + "QualtricsName": "Phone Addiction Experiment/ Baseline", + }, + + "Midline": { + "Name": "Midline", + "Start": config_dic["surveys"]["Midline"]["Start"], + "End": config_dic["surveys"]["Midline"]["End"], + "Code": "M", + "OldPhase": "Phase1", + "NewPhase": "Phase2", + "FirstQuestion": "Commitment", + "LastQuestion":"GeneralFeedbackLong", + "CompleteQuestion": "PredictUseNext3", + "RawEmailCol": "RecipientEmail", + "QualtricsID": "SV_eQxY6aqS3idzuOp", + "QualtricsName": "Phone Addiction Experiment/ Midline", + }, + + "Endline1": { + "Name": "Endline1", + "Start": config_dic["surveys"]["Endline1"]["Start"], + "End": config_dic["surveys"]["Endline1"]["End"], + "Code": "E1", + "OldPhase": "Phase2", + "NewPhase": "Phase3", + "FirstQuestion": "WellBeing1", + "LastQuestion":"GeneralFeedbackLong", + "CompleteQuestion": "PredictUseNext3", + "RawEmailCol": "RecipientEmail", + "QualtricsID": "SV_4IqfwzIhTKAj2Id", + "QualtricsName": "Phone Addiction Experiment/ Endline1",}, + + "Endline2": { + "Name": "Endline2", + "Start": config_dic["surveys"]["Endline2"]["Start"], + "End": config_dic["surveys"]["Endline2"]["End"], + "Code": "E2", + "OldPhase": "Phase3", + "NewPhase": "Phase4", + "FirstQuestion": "WellBeing1", + "LastQuestion": "GeneralFeedbackLong", + "CompleteQuestion": "PDFeedbackScale", + "RawEmailCol": "RecipientEmail", + "QualtricsID": "SV_6St5LOtmKbYCS5D", + "QualtricsName": "Phone Addiction Experiment/ Endline2", }, + + "Phase5Start": { + "Name": "Phase5Start", + "Start": config_dic["surveys"]["Phase5Start"]["Start"], + "End": config_dic["surveys"]["Phase5Start"]["End"], + "Code": 
"P5", + "OldPhase": "Phase4", + "NewPhase": "Phase5", + "FirstQuestion": "", + "LastQuestion": "", + "CompleteQuestion": "", + "RawEmailCol": "", + "QualtricsID": "", + "QualtricsName": "", }, + + "Phase6Start": { + "Name": "Phase6Start", + "Start": config_dic["surveys"]["Phase6Start"]["Start"], + "End": config_dic["surveys"]["Phase6Start"]["End"], + "Code": "P6", + "OldPhase": "Phase5", + "NewPhase": "Phase6", + "FirstQuestion": "", + "LastQuestion": "", + "CompleteQuestion": "", + "RawEmailCol": "", + "QualtricsID": "", + "QualtricsName": "", }, + + "Phase7Start": { + "Name": "Phase7Start", + "Start": config_dic["surveys"]["Phase7Start"]["Start"], + "End": config_dic["surveys"]["Phase7Start"]["End"], + "Code": "P7", + "OldPhase": "Phase6", + "NewPhase": "Phase7", + "FirstQuestion": "", + "LastQuestion": "", + "CompleteQuestion": "", + "RawEmailCol": "", + "QualtricsID": "", + "QualtricsName": "", }, + + "Phase8Start": { + "Name": "Phase8Start", + "Start": config_dic["surveys"]["Phase8Start"]["Start"], + "End": config_dic["surveys"]["Phase8Start"]["End"], + "Code": "P8", + "OldPhase": "Phase7", + "NewPhase": "Phase8", + "FirstQuestion": "", + "LastQuestion": "", + "CompleteQuestion": "", + "RawEmailCol": "", + "QualtricsID": "", + "QualtricsName": "", }, + + "Phase9Start": { + "Name": "Phase9Start", + "Start": config_dic["surveys"]["Phase9Start"]["Start"], + "End": config_dic["surveys"]["Phase9Start"]["End"], + "Code": "P9", + "OldPhase": "Phase8", + "NewPhase": "Phase9", + "FirstQuestion": "", + "LastQuestion": "", + "CompleteQuestion": "", + "RawEmailCol": "", + "QualtricsID": "", + "QualtricsName": "", }, + + "Phase10Start": { + "Name": "Phase10Start", + "Start": config_dic["surveys"]["Phase10Start"]["Start"], + "End": config_dic["surveys"]["Phase10Start"]["End"], + "Code": "P10", + "OldPhase": "Phase9", + "NewPhase": "Phase10", + "FirstQuestion": "", + "LastQuestion": "", + "CompleteQuestion": "", + "RawEmailCol": "", + "QualtricsID": "", + "QualtricsName": "", }, + + "Enrollment": { + "Name": "Enrollment", + "Start": config_dic["surveys"]["Enrollment"]["Start"], + "End": config_dic["surveys"]["Enrollment"]["End"], + "Code": "ET", + "OldPhase": "na", + "NewPhase": "na", + "FirstQuestion": "Q1", + "LastQuestion": "Q1", + "CompleteQuestion": "AppCode", + "RawEmailCol": "RecipientEmail", + "QualtricsID": "SV_cRXVtKEhyupnrCJ", + "QualtricsName": "PhoneAddiction Enrollment Text"}, + + "WeeklyText": { + "Name": "WeeklyText", + "Start": config_dic["surveys"]["WeeklyText"]["Start"], + "End": config_dic["surveys"]["WeeklyText"]["End"], + "Code": "WT", + "OldPhase": "na", + "NewPhase": "na", + "FirstQuestion": "Q1", + "LastQuestion": "Q9", + "CompleteQuestion": "AppCode", + "RawEmailCol": "RecipientEmail", + "QualtricsID": "", + "QualtricsName": "Phone Addiction Pilot6 - Texts Weekly Addiction Msgs"}, + + "PDBug": { + "Name": "PDBug", + "Start": config_dic["surveys"]["PDBug"]["Start"], + "End": config_dic["surveys"]["PDBug"]["End"], + "Code": "PD", + "OldPhase": "na", + "NewPhase": "na", + "FirstQuestion": "PDBug", + "LastQuestion": "PDBugLong", + "CompleteQuestion": "PDBug", + "RawEmailCol": "RecipientEmail", + "QualtricsID": "SV_8jsnuErKgFLWm2h", + "QualtricsName": "Phone Addiction PDBug"}, + + "TextSurvey1": { + "Name": "TextSurvey1", + "Start": config_dic["surveys"]["TextSurvey1"]["Start"], + "End": config_dic["surveys"]["TextSurvey1"]["End"], + "Code": "T1", + "OldPhase": "na", + "NewPhase": "na", + "FirstQuestion": "AddictionText1", + "LastQuestion": "AddictionText1", + 
"CompleteQuestion": "AddictionText1", + "RawEmailCol": "RecipientEmail", + "QualtricsID": "SV_1zhnYQqQOfchrDL", + "QualtricsName": "PhoneAddiction Text Survey - 1"}, + + "TextSurvey2": { + "Name": "TextSurvey1", + "Start": config_dic["surveys"]["TextSurvey2"]["Start"], + "End": config_dic["surveys"]["TextSurvey2"]["End"], + "Code": "T2", + "OldPhase": "na", + "NewPhase": "na", + "FirstQuestion": "AddictionText2", + "LastQuestion": "AddictionText2", + "CompleteQuestion": "AddictionText2", + "RawEmailCol": "RecipientEmail", + "QualtricsID": "SV_39qIJ0F4pu0hFCl", + "QualtricsName": "PhoneAddiction Text Survey - 2"}, + + "TextSurvey3": { + "Name": "TextSurvey3", + "Start": config_dic["surveys"]["TextSurvey3"]["Start"], + "End": config_dic["surveys"]["TextSurvey3"]["End"], + "Code": "T3", + "OldPhase": "na", + "NewPhase": "na", + "FirstQuestion": "AddictionText3", + "LastQuestion": "AddictionText3", + "CompleteQuestion": "AddictionText3", + "RawEmailCol": "RecipientEmail", + "QualtricsID": "SV_2bCk47THvm8oN3D", + "QualtricsName": "PhoneAddiction Text Survey - 3"}, + + "TextSurvey4": { + "Name": "TextSurvey4", + "Start": config_dic["surveys"]["TextSurvey4"]["Start"], + "End": config_dic["surveys"]["TextSurvey4"]["End"], + "Code": "T4", + "OldPhase": "na", + "NewPhase": "na", + "FirstQuestion": "AddictionText4", + "LastQuestion": "AddictionText4", + "CompleteQuestion": "AddictionText4", + "RawEmailCol": "RecipientEmail", + "QualtricsID": "SV_9nl3sAj94B5sVrD", + "QualtricsName": "PhoneAddiction Text Survey - 4"}, + + "TextSurvey5": { + "Name": "TextSurvey5", + "Start": config_dic["surveys"]["TextSurvey5"]["Start"], + "End": config_dic["surveys"]["TextSurvey5"]["End"], + "Code": "T5", + "OldPhase": "na", + "NewPhase": "na", + "FirstQuestion": "AddictionText5", + "LastQuestion": "AddictionText5", + "CompleteQuestion": "AddictionText5", + "RawEmailCol": "RecipientEmail", + "QualtricsID": "SV_6GtwyNSpJZ79ptP", + "QualtricsName": "PhoneAddiction Text Survey - 5"}, + + "TextSurvey6": { + "Name": "TextSurvey6", + "Start": config_dic["surveys"]["TextSurvey6"]["Start"], + "End": config_dic["surveys"]["TextSurvey6"]["End"], + "Code": "T6", + "OldPhase": "na", + "NewPhase": "na", + "FirstQuestion": "AddictionText6", + "LastQuestion": "AddictionText6", + "CompleteQuestion": "AddictionText6", + "RawEmailCol": "RecipientEmail", + "QualtricsID": "SV_5gyEVUJovOxg7vT", + "QualtricsName": "PhoneAddiction Text Survey - 6"}, + + "TextSurvey7": { + "Name": "TextSurvey7", + "Start": config_dic["surveys"]["TextSurvey7"]["Start"], + "End": config_dic["surveys"]["TextSurvey7"]["End"], + "Code": "T7", + "OldPhase": "na", + "NewPhase": "na", + "FirstQuestion": "AddictionText7", + "LastQuestion": "AddictionText7", + "CompleteQuestion": "AddictionText7", + "RawEmailCol": "RecipientEmail", + "QualtricsID": "SV_cxctTobi21Mt7OR", + "QualtricsName": "PhoneAddiction Text Survey - 7"}, + + "TextSurvey8": { + "Name": "TextSurvey8", + "Start": config_dic["surveys"]["TextSurvey8"]["Start"], + "End": config_dic["surveys"]["TextSurvey8"]["End"], + "Code": "T8", + "OldPhase": "na", + "NewPhase": "na", + "FirstQuestion": "AddictionText8", + "LastQuestion": "AddictionText8", + "CompleteQuestion": "AddictionText8", + "RawEmailCol": "RecipientEmail", + "QualtricsID": "SV_7824o4AljFSjxyZ", + "QualtricsName": "PhoneAddiction Text Survey - 8"}, + + "TextSurvey9": { + "Name": "TextSurvey9", + "Start": config_dic["surveys"]["TextSurvey9"]["Start"], + "End": config_dic["surveys"]["TextSurvey9"]["End"], + "Code": "T9", + "OldPhase": "na", + 
"NewPhase": "na", + "FirstQuestion": "AddictionText9", + "LastQuestion": "AddictionText9", + "CompleteQuestion": "AddictionText9", + "RawEmailCol": "RecipientEmail", + "QualtricsID": "SV_6ydCZmtdAfwAQU5", + "QualtricsName": "PhoneAddiction Text Survey - 9"}, +} + +phases = { + "Phase0": { + "Label": "Recruitment Phase", + "StartSurvey":surveys["Recruitment"], + "EndSurvey":surveys["Baseline"], + }, + + "Phase1": { + "Label": "Baseline Phase", + "StartSurvey":surveys["Baseline"], + "EndSurvey":surveys["Midline"], + }, + + "Phase2": { + "Label": "Treatment Phase", + "StartSurvey": surveys["Midline"], + "EndSurvey": surveys["Endline1"], + }, + "Phase3": { + "Label": "Treatment II Phase", + "StartSurvey": surveys["Endline1"], + "EndSurvey": surveys["Endline2"], + }, + + "Phase4": { + "Label": "Phase 4", + "StartSurvey": surveys["Endline2"], + "EndSurvey": surveys["Phase5Start"], + }, + + "Phase5": { + "Label": "Phase 5", + "StartSurvey": surveys["Phase5Start"], + "EndSurvey": surveys["Phase6Start"], + }, + + "Phase6": { + "Label": "Phase 6", + "StartSurvey": surveys["Phase6Start"], + "EndSurvey": surveys["Phase7Start"], + }, + + "Phase7": { + "Label": "Phase 7", + "StartSurvey": surveys["Phase7Start"], + "EndSurvey": surveys["Phase8Start"], + }, + "Phase8": { + "Label": "Phase 8", + "StartSurvey": surveys["Phase8Start"], + "EndSurvey": surveys["Phase9Start"], + }, + "Phase9": { + "Label": "Phase 9", + "StartSurvey": surveys["Phase9Start"], + "EndSurvey": surveys["Phase10Start"], + }, + } + +# first day we collect data +first_pull = config_dic["date_range"]["first_pull"] + +# last day we collect data +last_pull = config_dic["date_range"]["last_pull"] + +# the first survey master_raw_user will be built upon +initial_master_survey = "Recruitment" + +# the survey in which appcodes are initially recorded +appcode_survey = "Recruitment" + +# All appcodes in this contact list will be kept in PD data processing. all others will be dropped. +kept_appcode_cl = "BaselineCompletes_KeptAppCodes_4202020.csv" +number_of_kept_appcodes = 4043 + +# the survey in which participants are randomly assigned to treatment groups +randomize_survey = "Midline" + +# used contact lists names (that will override embedded data in the clean master object) +used_contact_lists = {"Baseline": "BaselineContacts_04122020.csv", + "Midline":"MidlineContacts 050220 final.csv", + "Endline1":"Endline1Contacts_20200523.csv", + "Endline2":"Endline2Contacts_20200613.csv"} + +#variable name for people assigned to an RSI (Delayed or Intermediate or None) +rsi_assign_var = "BonusTreatment" + +# variable name for people that actually have an rsi (note this is phase specific) +rsi_var = "RSIStatus" + +#use variable used for benchmark and earnings calc and midline stratification +use_var = "FITSBYUseMinutes" + +#apps that make up the fitsby app set +fitsby = ["facebook","instagram","twitter","snapchat","browser","youtube"] + +# the number of people that we know completed the initial master survey (we hard code and assert in case anything gets screwed up in the pipeline) +sample_size = 26101 + +#seed for randomization +seed = 329 + +#number of cores used when parallelizing +cores = config_user_dict['local']['cores'] + +# if user hasn't uploaded data since this day, the user is considered inactive. 
we generate a CL with all inactive users +active_threshold = (datetime.now() - timedelta(3)).date() + +#key is associated survey, and value is actual file name; the qualitative feedback files are summarized by Sherry and are +# housed in Confidential +qualitative_feedback_files = {"PDBug":"PDBug_QualitativeFeedback.csv", + "Midline":"Midline_QualitativeFeedback_LimitBug_and_MPLReasoning.csv", + "Endline1":"Endline1_QualitativeFeedback_LimitBug.csv", + "Endline2":"Endline2_QualitativeFeedback_LimitBug.csv"} + + +# - these columns will not contain a survey prefix attached to them because they have common values across all surveys. +# - they will not be removed as embedded data during survey cleaning +main_cols = [ + "FirstName", + "AppCode", + "MainEmail", + "PhoneNumber", +] + +## non prefixed columns that get introduced into the main dataset through used CLs or through the cleaning pipeline +# these variables ARE dropped during survey cleaning +embedded_main_cols = [ + "OptedOut", + "ActiveStatus", + "Server", + "PhoneModel", + "PlatformVersion", + "AppVersion", + "SawLimitSettingPage", + "HasSetLimit", + "Randomize", + "StratAddictionIndex", + "StratRestrictionIndex", + "Stratifier", + "BonusTreatment", + "EndlineBlockerChange", + "EndlineBlockerMPLRow", + "InitialPaymentPart1", + "InitialPaymentPart2", + "HourlyRate", + "Benchmark", + "HourOrHours", + "PredictReward", + "MaxEarnings", +] + +#Survey specific embedded data we want to keep for merge to master +kept_survey_data = ["Complete", + "SurveyStartDatetime", + "SurveyEndDatetime", + "MPLOption", + "RSIStatus", + "BlockerEarnings" + ] + +""" +The id_cols dictionary dictates which variables to anonymize. For each survey (the key), confidential.py (in lib/data_helpers) +will replace the columns (the values in the id_cols dictionary) with the user's AppCode. If the user doesn't have an +AppCode, then the column value will be replaced with the survey response id from the study_config.initial_master_survey. + +Note: + - the list of variables in DeleteAlways key will be deleted from every survey + - confidential.py will do a soft match to the id_cols key. 
So, if the survey name is RecruitmentFacebook, + confidential key will anonymize columns according to the 'Recruitment' key in id_cols""" +id_cols = {"Recruitment": + ["MainEmail", + "FirstName", + "Email", + "Email.1", + "EmailConfirm", + "PhoneNumber", + "PhoneNumberConfirm", + "PhoneNumber.1" + ], + + "Baseline": + ["FriendContact"], + + "DeleteAlways": + ["LocationLatitude", + "LocationLongitude", + "IPAddress"]} + diff --git a/17/replication_package/code/lib/experiment_specs/value_labels.yaml b/17/replication_package/code/lib/experiment_specs/value_labels.yaml new file mode 100644 index 0000000000000000000000000000000000000000..ae0089e83f4aa8b52bc7bca78eb356d4766e0fb5 --- /dev/null +++ b/17/replication_package/code/lib/experiment_specs/value_labels.yaml @@ -0,0 +1,83 @@ +# SPECIFIES WHICH VARIABLES SHOULD BE ENCODED AND HOW + +Education: + + VariableList: + - MotherEducation + - FatherEducation + + ValueLabels: + Completed grade school or less: 1.0 + Some high school: 2.0 + Completed high school: 3.0 + Some college: 4.0 + Completed college: 6.0 + Graduate or professional school after college: 7.0 + Don't know or does not apply: 8.0 + +Interest: + + VariableList: + - InterestInLimits + + ValueLabels: + Not at all interested: 0 + Slightly interested: 1 + Moderately interested: 2 + Very interested: 3 + +Frequency: + + VariableList: + - Addiction11 + - Addiction12 + - Addiction13 + - Addiction14 + - Addiction21 + - Addiction22 + - Addiction23 + - Addiction24 + - Addiction31 + - Addiction32 + - Addiction33 + - Addiction34 + - Addiction41 + - Addiction42 + - Addiction43 + - Addiction44 + + ValueLabels: + Never: 0 + Rarely: 0.25 + Sometimes: 0.5 + Often: 0.75 + Always: 1 + +UseChange: + + VariableList: + - IdealApp1 + - IdealApp2 + - IdealApp3 + - IdealApp4 + - IdealApp5 + - IdealApp6 + - IdealApp7 + - IdealApp8 + - IdealApp9 + - IdealApp10 + - IdealApp11 + - IdealApp12 + - IdealApp13 + - IdealApp14 + + ValueLabels: + ">50% more": 75 + 25-50% more: 37.5 + 1-25% more: 12.5 + the same as I do now: 0 + 1-25% less: -12.5 + 25-50% less: -37.5 + ">50% less": -75 + I don't use this app at all: -1 + diff --git a/17/replication_package/code/lib/experiment_specs/varsets.py b/17/replication_package/code/lib/experiment_specs/varsets.py new file mode 100644 index 0000000000000000000000000000000000000000..0a9f98ef611c35fe1c861cf863f299eda3f8089e --- /dev/null +++ b/17/replication_package/code/lib/experiment_specs/varsets.py @@ -0,0 +1,20 @@ +from lib.outcome_index.outcome_index import OutcomeIndex + +#indices used during stratification for treatment assignment. We construct these outside the main index construction function +# because the main function normalizes on the control group values at endline. 
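# [Editor's note, not in the original file:] judging from the constructor arguments below,
# each OutcomeIndex appears to bundle the survey outcomes entering an index, with
# pos_outcomes entering positively and neg_outcomes negatively; see
# lib/outcome_index/outcome_index.py in the package for the actual construction.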
+stratification_indices = ["RestrictionIndex","AddictionIndex"] + +index_class_dict ={ + "RestrictionIndex" : + OutcomeIndex(name="RestrictionIndex", + pos_outcomes=["InterestInLimits"], + neg_outcomes=["IdealUseChange"], + moderators=[]), + + "AddictionIndex": + OutcomeIndex(name="AddictionIndex", + pos_outcomes=["AddictionAvg",], + neg_outcomes=["LifeBetter"], + moderators=[]), + +} \ No newline at end of file diff --git a/17/replication_package/code/lib/gslab_make/__init__.py b/17/replication_package/code/lib/gslab_make/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..3b5eb13d25ceeb1e0b727f6a3a9ab0054327e401 --- /dev/null +++ b/17/replication_package/code/lib/gslab_make/__init__.py @@ -0,0 +1,57 @@ +#!/usr/bin/python +# -*- coding: latin-1 -*- + +from __future__ import absolute_import, division, print_function, unicode_literals +from builtins import (bytes, str, open, super, range, + zip, round, input, int, pow, object) + +""" +======================================================= +gslab_make: a library of make.py and LyX filling tools +======================================================= + +Description: +`make.py` is a Python script that facilitates running programs in batch mode. +`make.py` relies on functions in `gslab_make` which provide simple and +efficient commands that are portable across Unix and Windows. + +`gslab_make` also provides two functions for filling LyX templates with data. +These are `tablefill` and `textfill`. Please see their docstrings for further +detail on their use and functionalities. + +Prerequisites: +* Python 2.7/3.7 installed and executable path is added to system path + +To use functions in this library that call applications other than Python, +you must have the application installed with its executable path added to the +system path or defined as an environment variable/symbolic link. +This remark applies to: Matlab, Stata, Perl, Mathematica 8.0 (the math kernel +path must be added to system path), StatTransfer, LyX, R, and SAS. + +Notes: +* Default parameters, options, and executables used in `make.py` scripts are + defined in `/private/metadata.py`. The file extensions associated with + various applications are also defined in this file. +* For further detail on functions in `gslab_make`, refer to their docstrings + or the master documentation. 
+""" + +# Import make tools +from gslab_make.check_repo import check_module_size, get_modified_sources +from gslab_make.modify_dir import remove_dir, clear_dir, unzip, zip_dir +from gslab_make.move_sources import (link_inputs, link_externals, + copy_inputs, copy_externals) +from gslab_make.run_program import (run_stata, run_matlab, run_perl, run_python, + run_jupyter, run_mathematica, run_stat_transfer, + run_lyx, run_latex, run_r, run_sas, + execute_command, run_module) + +from gslab_make.make_utility import (update_executables, update_paths, copy_output) +from gslab_make.write_logs import (start_makelog, end_makelog, write_to_makelog, + log_files_in_output) +from gslab_make.write_source_logs import write_source_logs + + +# Import fill tools +from gslab_make.tablefill import tablefill +from gslab_make.textfill import textfill diff --git a/17/replication_package/code/lib/gslab_make/__init__.pyc b/17/replication_package/code/lib/gslab_make/__init__.pyc new file mode 100644 index 0000000000000000000000000000000000000000..427344fb1a6e05513179231f178afbb9f5497b65 Binary files /dev/null and b/17/replication_package/code/lib/gslab_make/__init__.pyc differ diff --git a/17/replication_package/code/lib/gslab_make/check_repo.py b/17/replication_package/code/lib/gslab_make/check_repo.py new file mode 100644 index 0000000000000000000000000000000000000000..35e6498c82045271ca80fd7d85bb1b40e521b9ab --- /dev/null +++ b/17/replication_package/code/lib/gslab_make/check_repo.py @@ -0,0 +1,380 @@ +# -*- coding: utf-8 -*- +from __future__ import absolute_import, division, print_function, unicode_literals +from future.utils import raise_from, string_types +from builtins import (bytes, str, open, super, range, + zip, round, input, int, pow, object) + +import os +import re +import git +import fnmatch +import traceback + +from termcolor import colored +import colorama +colorama.init() + +import gslab_make.private.metadata as metadata +import gslab_make.private.messages as messages +from gslab_make.private.exceptionclasses import CritError, ColoredError +from gslab_make.private.utility import norm_path, get_path, format_message, glob_recursive, open_yaml +from gslab_make.write_logs import write_to_makelog + + +def _get_file_sizes(dir_path, exclude): + """.. Walk through directory and get file sizes. + + Get file sizes for files in directory ``dir_path``, ignoring subdirectories in list ``exclude``. + + Parameters + ---------- + dir_path : str + Path of directory to walk through. + exclude : list + List of subdirectories to exclude when walking. + + Returns + ------- + file_size : dict + Dictionary of ``{file : size}`` for each file in ``dir_path``. + """ + + file_sizes = [] + + for root, dirs, files in os.walk(dir_path, topdown = True): + dirs[:] = [d for d in dirs if d not in exclude] + + files = [os.path.join(root, f) for f in files] + files = [norm_path(f) for f in files] + sizes = [os.lstat(f).st_size for f in files] + file_sizes.extend(zip(files, sizes)) + + file_sizes = dict(file_sizes) + + return(file_sizes) + + +def _get_git_ignore(repo): + """.. Get files ignored by git. + + Get files ignored by git for repository ``repo``. + + Parameters + ---------- + repo : :class:`git.Repo` + Git repository to get ignored files. + + Returns + ------- + ignore_files : list + List of files in repository ignored by git. 
+ """ + + g = git.Git(repo) + root = repo.working_tree_dir + + ignore = g.execute('git status --porcelain --ignored', shell = True).split('\n') + ignore = [i for i in ignore if re.match('!!', i)] + ignore = [i.lstrip('!!').strip() for i in ignore] + ignore = [os.path.join(root, i) for i in ignore] + + ignore_files = [] + + for i in ignore: + if os.path.isfile(i): + ignore_files.append(i) + elif os.path.isdir(i): + for root, dirs, files in os.walk(i): + files = [os.path.join(root, f) for f in files] + ignore_files.extend(files) + + ignore_files = [norm_path(i) for i in ignore_files] + + return(ignore_files) + + +def _parse_git_attributes(attributes): + """.. Get git lfs patterns from git attributes. + + Get git lfs patterns from file ``attributes``. + + Parameters + ---------- + attributes : str + Path of git attributes file. + + Returns + ------- + lfs_list: list + List of patterns to determine files tracked by git lfs. + """ + + try: + with open(attributes) as f: + attributes_list = f.readlines() + + lfs_regex = 'filter=lfs( )+diff=lfs( )+merge=lfs( )+-text' + lfs_list = [l for l in attributes_list if re.search(lfs_regex, l)] + lfs_list = [l.split()[0] for l in lfs_list] + + return(lfs_list) + except IOError: + raise_from(CritError(messages.crit_error_no_attributes), None) + + +def _check_path_lfs(path, lfs_list): + """.. Check if file matches git lfs patterns.""" + + for l in lfs_list: + if fnmatch.fnmatch(path, l): + return(True) + + return(False) + + +def _get_dir_sizes(dir_path): + """.. Get file sizes for directory. + + Get file sizes for files in directory ``dir_path``. + + Parameters + ---------- + dir_path : str + Path of directory to get file sizes. + + Returns + ------- + git_files : dict + Dictionary of ``{file : size}`` for each file tracked by git. + git_lfs_files : dict + Dictionary of ``{file : size}`` for each file tracked by git lfs. + """ + + try: + repo = git.Repo(dir_path, search_parent_directories = True) + root = repo.working_tree_dir + except: + raise_from(CritError(messages.crit_error_no_repo), None) + + git_files = _get_file_sizes(dir_path, exclude = ['.git']) + git_ignore_files = _get_git_ignore(repo) + + for ignore in git_ignore_files: + try: + git_files.pop(ignore) + except KeyError: + pass + + lfs_list = _parse_git_attributes(os.path.join(root, '.gitattributes')) + git_lfs_files = dict() + + for key in list(git_files.keys()): + if _check_path_lfs(key, lfs_list): + git_lfs_files[key] = git_files.pop(key) + + return(git_files, git_lfs_files) + + +def _get_size_values(git_files, git_lfs_files): + """.. Get file sizes for repository. + + Get file sizes for files in dictionary ``git_files`` and dictionary ``git_lfs_files``. + + Parameters + ---------- + git_files : dict + Dictionary of ``{file : size}`` for each file tracked by git. + git_lfs_files : dict + Dictionary of ``{file : size}`` for each file tracked by git lfs. + + Returns + ------- + file_MB : float + Size of largest file tracked by git in megabytes. + total_MB : float + Total size of files tracked by git in megabytes. + file_MB : float + Size of largest file tracked by git lfs in megabytes. + total_MB : float + Total size of files tracked by git lfs in megabytes. 
+ """ + + file_MB = max(git_files.values() or [0]) + total_MB = sum(git_files.values() or [0]) + file_MB_lfs = max(git_lfs_files.values() or [0]) + total_MB_lfs = sum(git_lfs_files.values() or [0]) + + size_list = [file_MB, total_MB, file_MB_lfs, total_MB_lfs] + size_list = [size / (1024 ** 2) for size in size_list] + + return(size_list) + + +def check_module_size(paths): + """.. Check file sizes for module. + + Checks file sizes for files to be committed in the current working directory. + Compares file sizes to size limits in file ``config`` and + produces warnings if any of the following limits are exceeded. + + - Individual size of a file tracked by git lfs (``file_MB_limit_lfs``) + - Total size of all files tracked by git lfs (``total_MB_limit_lfs``) + - Individual size of a file tracked by git (``file_MB_limit``) + - Total size of all files tracked by git (``total_MB_limit``) + + Warning messages are appended to file ``makelog``. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + + Path Keys + --------- + config : str + Path of project configuration file. + makelog : str + Path of makelog. + + Returns + ------- + None + """ + + try: + git_files, git_lfs_files = _get_dir_sizes('.') + file_MB, total_MB, file_MB_lfs, total_MB_lfs = _get_size_values(git_files, git_lfs_files) + + config = get_path(paths, 'config') + config = open_yaml(config) + max_file_sizes = config['max_file_sizes'] + + print_message = '' + if file_MB > max_file_sizes['file_MB_limit']: + print_message = print_message + messages.warning_git_file_print % max_file_sizes['file_MB_limit'] + if total_MB > max_file_sizes['total_MB_limit']: + print_message = print_message + messages.warning_git_repo % max_file_sizes['total_MB_limit'] + if file_MB_lfs > max_file_sizes['file_MB_limit_lfs']: + print_message = print_message + messages.warning_git_lfs_file_print % max_file_sizes['file_MB_limit_lfs'] + if total_MB_lfs > max_file_sizes['total_MB_limit_lfs']: + print_message = print_message + messages.warning_git_lfs_repo % max_file_sizes['total_MB_limit_lfs'] + print_message = print_message.strip() + + log_message = '' + if file_MB > max_file_sizes['file_MB_limit']: + log_message = log_message + messages.warning_git_file_log % max_file_sizes['file_MB_limit'] + exceed_files = [f for (f, s) in git_files.items() if s / (1024 ** 2) > max_file_sizes['file_MB_limit']] + exceed_files = '\n'.join(exceed_files) + log_message = log_message + '\n' + exceed_files + if total_MB > max_file_sizes['total_MB_limit']: + log_message = log_message + messages.warning_git_repo % max_file_sizes['total_MB_limit'] + if file_MB_lfs > max_file_sizes['file_MB_limit_lfs']: + log_message = log_message + messages.warning_git_lfs_file_log % max_file_sizes['file_MB_limit_lfs'] + exceed_files = [f for (f, s) in git_lfs_files.items() if s / (1024 ** 2) > max_file_sizes['file_MB_limit_lfs']] + exceed_files = '\n'.join(exceed_files) + log_message = log_message + '\n' + exceed_files + if total_MB_lfs > max_file_sizes['total_MB_limit_lfs']: + log_message = log_message + messages.warning_git_lfs_repo % max_file_sizes['total_MB_limit_lfs'] + log_message = log_message.strip() + + if print_message: + print(colored(print_message, metadata.color_failure)) + if log_message: + write_to_makelog(paths, log_message) + except: + error_message = 'Error with `check_repo_size`. Traceback can be found below.' 
+ error_message = format_message(error_message) + write_to_makelog(paths, error_message + '\n\n' + traceback.format_exc()) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def _get_git_status(repo): + """.. Get git status. + + Get git status for repository ``repo``. + + Parameters + ---------- + repo : :class:`git.Repo ` + Git repository to show working tree status. + + Returns + ------- + file_list : list + List of changed files in git repository according to git status. + """ + + root = repo.working_tree_dir + + file_list = repo.git.status(porcelain = True) + file_list = file_list.split('\n') + file_list = [f.lstrip().lstrip('MADRCU?!').lstrip() for f in file_list] + file_list = [os.path.join(root, f) for f in file_list] + file_list = [norm_path(f) for f in file_list] + + return(file_list) + + +def get_modified_sources(paths, + source_map, + depth = float('inf')): + """.. Get source files considered changed by git. + + Checks the modification status for all sources contained in list + ``source_map`` (returned by :ref:`sourcing functions`). + Produces warning if sources have been modified according to git. + When walking through sources, float ``depth`` determines level of depth to walk. + Warning messages are appended to file ``makelog``. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + source_map : list + Mapping of sources (returned from :ref:`sourcing functions`). + depth : float, optional + Level of depth when walking through source directories. Defaults to infinite. + + Path Keys + --------- + makelog : str + Path of makelog. + + Returns + ------- + overlap : list + List of source files considered changed by git. + + Notes + ----- + + """ + + try: + source_list = [source for source, destination in source_map] + source_list = [glob_recursive(source, depth) for source in source_list] + source_files = [f for source in source_list for f in source] + source_files = set(source_files) + + try: + repo = git.Repo('.', search_parent_directories = True) + except: + raise_from(CritError(messages.crit_error_no_repo), None) + modified = _get_git_status(repo) + + overlap = [l for l in source_files if l in modified] + + if overlap: + if len(overlap) > 100: + overlap = overlap[0:100] + overlap = overlap + ["and more (file list truncated due to length)"] + message = messages.warning_modified_files % '\n'.join(overlap) + write_to_makelog(paths, message) + print(colored(message, metadata.color_failure)) + except: + error_message = 'Error with `get_modified_sources`. Traceback can be found below.' 
+ error_message = format_message(error_message) + write_to_makelog(paths, error_message + '\n\n' + traceback.format_exc()) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + +__all__ = ['check_module_size', 'get_modified_sources'] diff --git a/17/replication_package/code/lib/gslab_make/check_repo.pyc b/17/replication_package/code/lib/gslab_make/check_repo.pyc new file mode 100644 index 0000000000000000000000000000000000000000..a54c03a4a1ce62ac4def6c0e4d694e3e6b7b51e8 Binary files /dev/null and b/17/replication_package/code/lib/gslab_make/check_repo.pyc differ diff --git a/17/replication_package/code/lib/gslab_make/make_utility.py b/17/replication_package/code/lib/gslab_make/make_utility.py new file mode 100644 index 0000000000000000000000000000000000000000..777dfde5722574594bde966aede36c185125a122 --- /dev/null +++ b/17/replication_package/code/lib/gslab_make/make_utility.py @@ -0,0 +1,149 @@ +# -*- coding: utf-8 -*- +from __future__ import absolute_import, division, print_function, unicode_literals +from future.utils import raise_from, string_types +from builtins import (bytes, str, open, super, range, + zip, round, input, int, pow, object) + +import os +import shutil +import traceback + +from termcolor import colored +import colorama +colorama.init() + +import gslab_make.private.messages as messages +import gslab_make.private.metadata as metadata +from gslab_make.private.exceptionclasses import CritError, ColoredError +from gslab_make.private.utility import get_path, format_message, norm_path, open_yaml + + +def _check_os(osname = os.name): + """Check OS is either POSIX or NT. + + Parameters + ---------- + osname : str, optional + Name of OS. Defaults to ``os.name``. + + Returns + ------- + None + """ + + if osname not in ['posix', 'nt']: + raise CritError(messages.crit_error_unknown_system % osname) + + +def update_executables(paths, osname = None): + """.. Update executable names using user configuration file. + + Updates executable names with executables listed in file ``config_user``. + + Note + ---- + Executable names are used by :ref:`program functions `. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + osname : str, optional + Name of OS. Defaults to ``os.name``. + + Path Keys + --------- + config_user : str + Path of user configuration file. + + Returns + ------- + None + """ + + osname = osname if osname else os.name # https://github.com/sphinx-doc/sphinx/issues/759 + + try: + config_user = get_path(paths, 'config_user') + config_user = open_yaml(config_user) + + _check_os(osname) + + if config_user['local']['executables']: + metadata.default_executables[osname].update(config_user['local']['executables']) + except: + error_message = 'Error with update_executables. Traceback can be found below.' + error_message = format_message(error_message) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def update_paths(paths): + """.. Update paths using user configuration file. + + Updates dictionary ``paths`` with externals listed in file ``config_user``. + + Note + ---- + The ``paths`` argument for :ref:`sourcing functions` is used not only to get + default paths for writing/logging, but also to + `string format `__ + sourcing instructions. + + Parameters + ---------- + paths : dict + Dictionary of paths to update. + Dictionary should ex-ante contain values for all keys listed below. 
+ + Path Keys + --------- + config_user : str + Path of user configuration file. + + Returns + ------- + paths : dict + Dictionary of updated paths. + """ + + try: + config_user = get_path(paths, 'config_user') + config_user = open_yaml(config_user) + + if config_user['external']: + paths.update(config_user['external']) + + return(paths) + except: + error_message = 'Error with update_paths. Traceback can be found below.' + error_message = format_message(error_message) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def copy_output(file, copy_dir): + """.. Copy output file. + + Copies output ``file`` to directory ``copy_dir`` with user prompt to confirm copy. + + Parameters + ---------- + file : str + Path of file to copy. + copy_dir : str + Directory to copy file. + + Returns + ------- + None + """ + + file = norm_path(file) + copy_dir = norm_path(copy_dir) + message = colored(messages.warning_copy, color = 'cyan') + upload = input(message % (file, copy_dir)) + + if upload.lower().strip() == "yes": + shutil.copy(file, copy_dir) + + +__all__ = ['update_executables', 'update_paths', 'copy_output'] diff --git a/17/replication_package/code/lib/gslab_make/make_utility.pyc b/17/replication_package/code/lib/gslab_make/make_utility.pyc new file mode 100644 index 0000000000000000000000000000000000000000..31ca26c8c1cecc93283f0c9ad5341b949279289b Binary files /dev/null and b/17/replication_package/code/lib/gslab_make/make_utility.pyc differ diff --git a/17/replication_package/code/lib/gslab_make/modify_dir.py b/17/replication_package/code/lib/gslab_make/modify_dir.py new file mode 100644 index 0000000000000000000000000000000000000000..e322a4beb48e0ac98548df78137715742ce65de1 --- /dev/null +++ b/17/replication_package/code/lib/gslab_make/modify_dir.py @@ -0,0 +1,251 @@ +# -*- coding: utf-8 -*- +from __future__ import absolute_import, division, print_function, unicode_literals +from future.utils import raise_from, string_types +from builtins import (bytes, str, open, super, range, + zip, round, input, int, pow, object) + +import os +import sys +import glob +import zipfile +import traceback +import subprocess + +if (sys.version_info < (3, 0)) and (os.name == 'nt'): + import gslab_make.private.subprocess_fix as subprocess_fix +else: + import subprocess as subprocess_fix + +from termcolor import colored +import colorama +colorama.init() + +import gslab_make.private.metadata as metadata +import gslab_make.private.messages as messages +from gslab_make.private.exceptionclasses import ColoredError +from gslab_make.private.utility import convert_to_list, norm_path, format_message + + +def remove_path(path, option = '', quiet = False): + """.. Remove path using system command. + + Remove path ``path`` using system command. Safely removes symbolic links. + Path can be specified with the * shell pattern + (see `here `__). + + Parameters + ---------- + path : str + Path to remove. + option : str, optional + Options for system command. Defaults to ``-rf`` for POSIX and ``/s /q`` for NT. + quiet : bool, optional + Suppress printing of path removed. Defaults to ``False``. + + Returns + ------- + None + + Example + ------- + The following code removes path ``path``. + + .. code-block:: python + + remove_path('path') + + The following code removes all paths beginning with ``path``. + + .. 
code-block:: python + + remove_path('path*') + """ + + try: + path = norm_path(path) + if not option: + option = metadata.default_options[os.name]['rmdir'] + + command = metadata.commands[os.name]['rmdir'] % (option, path) + process = subprocess_fix.Popen(command, shell = True) + process.wait() + # ACTION ITEM: ADD DEBUGGING TO SUBPROCESS CALL + + if not quiet: + message = 'Removed: `%s`' % path + print(colored(message, metadata.color_success)) + except: + error_message = 'Error with `remove_path`. Traceback can be found below.' + error_message = format_message(error_message) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def remove_dir(dir_list, quiet = False): + """.. Remove directory using system command. + + Remove directories in list ``dir_list`` using system command. + Safely removes symbolic links. Directories can be specified with the * shell pattern + (see `here `__). + + Parameters + ---------- + dir_list : str, list + Directory or list of directories to remove. + quiet : bool, optional + Suppress printing of directories removed. Defaults to ``False``. + + Returns + ------- + None + + Example + ------- + The following code removes directories ``dir1`` and ``dir2``. + + .. code-block:: python + + remove_dir(['dir1', 'dir2']) + + The following code removes directories beginning with ``dir``. + + .. code-block:: python + + remove_dir(['dir1*']) + """ + + try: + dir_list = convert_to_list(dir_list, 'dir') + dir_list = [norm_path(dir_path) for dir_path in dir_list] + dir_list = [d for directory in dir_list for d in glob.glob(directory)] + + for dir_path in dir_list: + if os.path.isdir(dir_path): + remove_path(dir_path, quiet = quiet) + elif os.path.isfile(dir_path): + raise_from(TypeError(messages.type_error_not_dir % dir_path), None) + except: + error_message = 'Error with `remove_dir`. Traceback can be found below.' + error_message = format_message(error_message) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def clear_dir(dir_list): + """.. Clear directory. Create directory if nonexistent. + + Clears all directories in list ``dir_list`` using system command. + Safely clears symbolic links. Directories can be specified with the * shell pattern + (see `here `__). + + Note + ---- + To clear a directory means to remove all contents of a directory. + If the directory is nonexistent, the directory is created, + unless the directory is specified via shell pattern. + + Parameters + ---------- + dir_list : str, list + Directory or list of directories to clear. + + Returns + ------- + None + + Example + ------- + The following code clears directories ``dir1`` and ``dir2``. + + .. code-block:: python + + clear_dir(['dir1', 'dir2']) + + The following code clears directories beginning with ``dir``. + + .. code-block:: python + + clear_dir(['dir*']) + """ + + try: + dir_list = convert_to_list(dir_list, 'dir') + dir_glob = [] + + for dir_path in dir_list: + expand = glob.glob(dir_path) + expand = expand if expand else [dir_path] + dir_glob.extend(expand) + + remove_dir(dir_glob, quiet = True) + + for dir_path in dir_glob: + os.makedirs(dir_path) + message = 'Cleared: `%s`' % dir_path + print(colored(message, metadata.color_success)) + except: + error_message = 'Error with `clear_dir`. Traceback can be found below.' + error_message = format_message(error_message) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def unzip(zip_path, output_dir): + """.. Unzip file to directory. 
+ + Unzips file ``zip_path`` to directory ``output_dir``. + + Parameters + ---------- + zip_path : str + Path of file to unzip. + output_dir : str + Directory to write outputs of unzipped file. + + Returns + ------- + None + """ + + try: + with zipfile.ZipFile(zip_path, allowZip64 = True) as z: + z.extractall(output_dir) + except: + error_message = 'Error with `zip_path`. Traceback can be found below.' + error_message = format_message(error_message) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def zip_dir(source_dir, zip_dest): + """.. Zip directory to file. + + Zips directory ``source_dir`` to file ``zip_dest``. + + Parameters + ---------- + source_dir : str + Path of directory to zip. + zip_dest : str + Destination of zip file. + + Returns + ------- + None + """ + + try: + with zipfile.ZipFile('%s' % (zip_dest), 'w', zipfile.ZIP_DEFLATED, allowZip64 = True) as z: + source_dir = norm_path(source_dir) + + for root, dirs, files in os.walk(source_dir): + for f in files: + file_path = os.path.join(root, f) + file_name = os.path.basename(file_path) + z.write(file_path, file_name) + + message = 'Zipped: `%s` as `%s`' % (file_path, file_name) + print(colored(message, metadata.color_success)) + except: + error_message = 'Error with `zip_dir`. Traceback can be found below.' + error_message = format_message(error_message) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +__all__ = ['remove_dir', 'clear_dir', 'unzip', 'zip_dir'] \ No newline at end of file diff --git a/17/replication_package/code/lib/gslab_make/modify_dir.pyc b/17/replication_package/code/lib/gslab_make/modify_dir.pyc new file mode 100644 index 0000000000000000000000000000000000000000..508733f7f477205f0614b2641dba227e565da526 Binary files /dev/null and b/17/replication_package/code/lib/gslab_make/modify_dir.pyc differ diff --git a/17/replication_package/code/lib/gslab_make/move_sources.py b/17/replication_package/code/lib/gslab_make/move_sources.py new file mode 100644 index 0000000000000000000000000000000000000000..30094dac8261f06535d807f82aa7fe5b62e065c7 --- /dev/null +++ b/17/replication_package/code/lib/gslab_make/move_sources.py @@ -0,0 +1,618 @@ +# -*- coding: utf-8 -*- +from __future__ import absolute_import, division, print_function, unicode_literals +from future.utils import raise_from, string_types +from builtins import (bytes, str, open, super, range, + zip, round, input, int, pow, object) + +import os +import traceback + +from termcolor import colored +import colorama +colorama.init() + +import gslab_make.private.metadata as metadata +from gslab_make.private.exceptionclasses import ColoredError +from gslab_make.private.movedirective import MoveList +from gslab_make.private.utility import get_path, format_message +from gslab_make.write_logs import write_to_makelog + + +def _create_links(paths, + file_list): + """.. Create symlinks from list of files containing linking instructions. + + Create symbolic links using instructions contained in files of list ``file_list``. + Instructions are `string formatted `__ + using paths dictionary ``paths``. Symbolic links are written in directory ``move_dir``. + Status messages are appended to file ``makelog``. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + Dictionary additionally used to string format linking instructions. + file_list : str, list + File or list of files containing linking instructions. 
+ + Path Keys + --------- + move_dir : str + Directory to write links. + makelog : str + Path of makelog. + + Returns + ------- + source_map : list + List of (source, destination) for each symlink created. + """ + + move_dir = get_path(paths, 'move_dir') + + move_list = MoveList(file_list, move_dir, paths) + if move_list.move_directive_list: + os.makedirs(move_dir) + source_map = move_list.create_symlinks() + else: + source_map = [] + + return(source_map) + + +def _create_copies(paths, + file_list): + """.. Create copies from list of files containing copying instructions. + + Create copies using instructions contained in files of list ``file_list``. + Instructions are `string formatted `__ + using paths dictionary ``paths``. Copies are written in directory ``move_dir``. + Status messages are appended to file ``makelog``. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + Dictionary additionally used to string format copying instructions. + file_list : str, list + File or list of files containing copying instructions. + + Path Keys + --------- + move_dir : str + Directory to write copies. + makelog : str + Path of makelog. + + Returns + ------- + source_map : list + List of (source, destination) for each copy created. + """ + + move_dir = get_path(paths, 'move_dir') + + move_list = MoveList(file_list, move_dir, paths) + if move_list.move_directive_list: + os.makedirs(move_dir) + source_map = move_list.create_copies() + else: + source_map = [] + + return(source_map) + + +def link_inputs(paths, + file_list): + """.. Create symlinks to inputs from list of files containing linking instructions. + + Create symbolic links using instructions contained in files of list ``file_list``. + Instructions are `string formatted `__ + using paths dictionary ``paths``. Symbolic links are written in directory ``input_dir``. + Status messages are appended to file ``makelog``. + + Instruction files on how to create symbolic links (destinations) from targets (sources) + should be formatted in the following way. + + .. code-block:: md + + # Each line of instruction should contain a destination and source delimited by a `|` + # Lines beginning with # are ignored + destination | source + + .. Note:: + Symbolic links can be created to both files and directories. + + .. Note:: + Instruction files can be specified with the * shell pattern + (see `here `__). + Destinations and their sources can also be specified with the * shell pattern. + The number of wildcards must be the same for both destinations and sources. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + Dictionary additionally used to string format linking instructions. + file_list : str, list + File or list of files containing linking instructions. + + Path Keys + --------- + input_dir : str + Directory to write symlinks. + makelog : str + Path of makelog. + + Returns + ------- + source_map : list + List of (source, destination) for each symlink created. + + Example + ------- + Suppose you call the following function. + + .. code-block:: python + + link_inputs(paths, ['file1'], formatting_dict) + + Suppose ``paths`` contained the following values. + + .. code-block:: md + + paths = {'root': '/User/root/', + 'makelog': 'make.log', + 'input_dir': 'input'} + + Now suppose instruction file ``file1`` contained the following text. + + .. 
code-block:: md + + destination1 | {root}/source1 + + The ``{root}`` in the instruction file would be string formatted using ``paths``. + Therefore, the function would parse the instruction as: + + .. code-block:: md + + destination1 | /User/root/source1 + + Example + ------- + The following code would use instruction files ``file1`` and ``file2`` to create symbolic links. + + .. code-block:: python + + link_inputs(paths, ['file1', 'file2']) + + Suppose instruction file ``file1`` contained the following text. + + .. code-block:: md + + destination1 | source1 + destination2 | source2 + + Symbolic links ``destination1`` and ``destination1`` would be created in directory ``paths['input_dir']``. + Their targets would be ``source1`` and ``source2``, respectively. + + Example + ------- + Suppose you have the following targets. + + .. code-block:: md + + source1 + source2 + source3 + + Specifying ``destination* | source*`` in one of your instruction files would + create the following symbolic links in ``paths['input_dir']``. + + .. code-block:: md + + destination1 + destination2 + destination3 + """ + + try: + paths['move_dir'] = get_path(paths, 'input_dir') + source_map = _create_links(paths, file_list) + + message = 'Input links successfully created!' + write_to_makelog(paths, message) + print(colored(message, metadata.color_success)) + + return(source_map) + except: + error_message = 'An error was encountered with `link_inputs`. Traceback can be found below.' + error_message = format_message(error_message) + write_to_makelog(paths, error_message + '\n\n' + traceback.format_exc()) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def link_externals(paths, + file_list): + """.. Create symlinks to externals from list of files containing linking instructions. + + Create symbolic links using instructions contained in files of list ``file_list``. + Instructions are `string formatted `__ + using paths dictionary ``paths``. Symbolic links are written in directory ``external_dir``. + Status messages are appended to file ``makelog``. + + Instruction files on how to create symbolic links (destinations) from targets (sources) + should be formatted in the following way. + + .. code-block:: md + + # Each line of instruction should contain a destination and source delimited by a `|` + # Lines beginning with # are ignored + destination | source + + .. Note:: + Symbolic links can be created to both files and directories. + + .. Note:: + Instruction files can be specified with the * shell pattern + (see `here `__). + Destinations and their sources can also be specified with the * shell pattern. + The number of wildcards must be the same for both destinations and sources. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + Dictionary additionally used to string format linking instructions. + file_list : str, list + File or list of files containing linking instructions. + + Path Keys + --------- + external_dir : str + Directory to write symlinks. + makelog : str + Path of makelog. + + Returns + ------- + source_map : list + List of (source, destination) for each symlink created. + + Example + ------- + Suppose you call the following function. + + .. code-block:: python + + link_externals(paths, ['file1'], formatting_dict) + + Suppose ``paths`` contained the following values. + + .. 
code-block:: md + + paths = {'root': '/User/root/', + 'makelog': 'make.log', + 'input_dir': 'input'} + + Now suppose instruction file ``file1`` contained the following text. + + .. code-block:: md + + destination1 | {root}/source1 + + The ``{root}`` in the instruction file would be string formatted using ``paths``. + Therefore, the function would parse the instruction as: + + .. code-block:: md + + destination1 | /User/root/source1 + + Example + ------- + The following code would use instruction files ``file1`` and ``file2`` to create symbolic links. + + .. code-block:: python + + link_externals(paths, ['file1', 'file2']) + + Suppose instruction file ``file1`` contained the following text. + + .. code-block:: md + + destination1 | source1 + destination2 | source2 + + Symbolic links ``destination1`` and ``destination1`` would be created in directory ``paths['external_dir']``. + Their targets would be ``source1`` and ``source2``, respectively. + + Example + ------- + Suppose you have the following targets. + + .. code-block:: md + + source1 + source2 + source3 + + Specifying ``destination* | source*`` in one of your instruction files would + create the following symbolic links in ``paths['external_dir']``. + + .. code-block:: md + + destination1 + destination2 + destination3 + """ + + try: + paths['move_dir'] = get_path(paths, 'external_dir') + source_map = _create_links(paths, file_list) + + message = 'External links successfully created!' + write_to_makelog(paths, message) + print(colored(message, metadata.color_success)) + + return(source_map) + except: + error_message = 'An error was encountered with `link_externals`. Traceback can be found below.' + error_message = format_message(error_message) + write_to_makelog(paths, error_message + '\n\n' + traceback.format_exc()) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def copy_inputs(paths, + file_list): + """.. Create copies to inputs from list of files containing copying instructions. + + Create copies using instructions contained in files of list ``file_list``. + Instructions are `string formatted `__ + using paths dictionary ``paths``. Copies are written in directory ``input_dir``. + Status messages are appended to file ``makelog``. + + Instruction files on how to create copies (destinations) from targets (sources) + should be formatted in the following way. + + .. code-block:: md + + # Each line of instruction should contain a destination and source delimited by a `|` + # Lines beginning with # are ignored + destination | source + + .. Note:: + Instruction files can be specified with the * shell pattern + (see `here `__). + Destinations and their sources can also be specified with the * shell pattern. + The number of wildcards must be the same for both destinations and sources. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + Dictionary additionally used to string format copying instructions. + file_list : str, list + File or list of files containing copying instructions. + + Path Keys + --------- + input_dir : str + Directory to write copies. + makelog : str + Path of makelog. + + Returns + ------- + source_map : list + List of (source, destination) for each copy created. + + Example + ------- + Suppose you call the following function. + + .. code-block:: python + + copy_inputs(paths, ['file1'], formatting_dict) + + Suppose ``paths`` contained the following values. + + .. 
code-block:: md + + paths = {'root': '/User/root/', + 'makelog': 'make.log', + 'input_dir': 'input'} + + Now suppose instruction file ``file1`` contained the following text. + + .. code-block:: md + + destination1 | {root}/source1 + + The ``{root}`` in the instruction file would be string formatted using ``paths``. + Therefore, the function would parse the instruction as: + + .. code-block:: md + + destination1 | /User/root/source1 + + Example + ------- + The following code would use instruction files ``file1`` and ``file2`` to create copies. + + .. code-block:: python + + copy_inputs(paths, ['file1', 'file2']) + + Suppose instruction file ``file1`` contained the following text. + + .. code-block:: md + + destination1 | source1 + destination2 | source2 + + Copies ``destination1`` and ``destination1`` would be created in directory ``paths['input_dir']``. + Their targets would be ``source1`` and ``source2``, respectively. + + Example + ------- + Suppose you have the following targets. + + .. code-block:: md + + source1 + source2 + source3 + + Specifying ``destination* | source*`` in one of your instruction files would + create the following copies in ``paths['input_dir']``. + + .. code-block:: md + + destination1 + destination2 + destination3 + """ + + try: + paths['move_dir'] = get_path(paths, 'input_dir') + source_map = _create_copies(paths, file_list) + + message = 'Input copies successfully created!' + write_to_makelog(paths, message) + print(colored(message, metadata.color_success)) + + return(source_map) + except: + error_message = 'An error was encountered with `copy_inputs`. Traceback can be found below.' + error_message = format_message(error_message) + write_to_makelog(paths, error_message + '\n\n' + traceback.format_exc()) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def copy_externals(paths, + file_list): + """.. Create copies to externals from list of files containing copying instructions. + + Create copies using instructions contained in files of list ``file_list``. + Instructions are `string formatted `__ + using paths dictionary ``paths``. Copies are written in directory ``external_dir``. + Status messages are appended to file ``makelog``. + + Instruction files on how to create copies (destinations) from targets (sources) + should be formatted in the following way. + + .. code-block:: md + + # Each line of instruction should contain a destination and source delimited by a `|` + # Lines beginning with # are ignored + destination | source + + .. Note:: + Instruction files can be specified with the * shell pattern + (see `here `__). + Destinations and their sources can also be specified with the * shell pattern. + The number of wildcards must be the same for both destinations and sources. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + Dictionary additionally used to string format copying instructions. + file_list : str, list + File or list of files containing copying instructions. + + Path Keys + --------- + external_dir : str + Directory to write copies. + makelog : str + Path of makelog. + + Returns + ------- + source_map : list + List of (source, destination) for each copy created. + + Example + ------- + Suppose you call the following function. + + .. code-block:: python + + copy_externals(paths, ['file1'], formatting_dict) + + Suppose ``paths`` contained the following values. + + .. 
code-block:: md + + paths = {'root': '/User/root/', + 'makelog': 'make.log', + 'input_dir': 'input'} + + Now suppose instruction file ``file1`` contained the following text. + + .. code-block:: md + + destination1 | {root}/source1 + + The ``{root}`` in the instruction file would be string formatted using ``paths``. + Therefore, the function would parse the instruction as: + + .. code-block:: md + + destination1 | /User/root/source1 + + Example + ------- + The following code would use instruction files ``file1`` and ``file2`` to create copies. + + .. code-block:: python + + copy_externals(paths, ['file1', 'file2']) + + Suppose instruction file ``file1`` contained the following text. + + .. code-block:: md + + destination1 | source1 + destination2 | source2 + + Copies ``destination1`` and ``destination1`` would be created in directory ``paths['external_dir']``. + Their targets would be ``source1`` and ``source2``, respectively. + + Example + ------- + Suppose you have the following targets. + + .. code-block:: md + + source1 + source2 + source3 + + Specifying ``destination* | source*`` in one of your instruction files would + create the following copies in ``paths['external_dir']``. + + .. code-block:: md + + destination1 + destination2 + destination3 + """ + + try: + paths['move_dir'] = get_path(paths, 'external_dir') + source_map = _create_copies(paths, file_list) + + message = 'External copies successfully created!' + write_to_makelog(paths, message) + print(colored(message, metadata.color_success)) + + return(source_map) + except: + error_message = 'An error was encountered with `copy_externals`. Traceback can be found below.' + error_message = format_message(error_message) + write_to_makelog(paths, error_message + '\n\n' + traceback.format_exc()) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + +__all__ = ['link_inputs', 'link_externals', 'copy_inputs', 'copy_externals'] \ No newline at end of file diff --git a/17/replication_package/code/lib/gslab_make/move_sources.pyc b/17/replication_package/code/lib/gslab_make/move_sources.pyc new file mode 100644 index 0000000000000000000000000000000000000000..bde469af571f617cefb3cee319d2199e2d15f59c Binary files /dev/null and b/17/replication_package/code/lib/gslab_make/move_sources.pyc differ diff --git a/17/replication_package/code/lib/gslab_make/private/__init__.py b/17/replication_package/code/lib/gslab_make/private/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git a/17/replication_package/code/lib/gslab_make/private/__init__.pyc b/17/replication_package/code/lib/gslab_make/private/__init__.pyc new file mode 100644 index 0000000000000000000000000000000000000000..f3a8118108b629551fab56e6abfc4477c451c5b4 Binary files /dev/null and b/17/replication_package/code/lib/gslab_make/private/__init__.pyc differ diff --git a/17/replication_package/code/lib/gslab_make/private/exceptionclasses.py b/17/replication_package/code/lib/gslab_make/private/exceptionclasses.py new file mode 100644 index 0000000000000000000000000000000000000000..61236485730f321c4ae57b6ef53def8e44b48756 --- /dev/null +++ b/17/replication_package/code/lib/gslab_make/private/exceptionclasses.py @@ -0,0 +1,61 @@ +# -*- coding: utf-8 -*- +from __future__ import absolute_import, division, print_function, unicode_literals +from future.utils import raise_from, string_types +from builtins import (bytes, str, open, super, range, + zip, round, input, int, pow, object) + +import sys 
+import codecs + +from termcolor import colored +import colorama +colorama.init() + +import gslab_make.private.metadata as metadata + +""" +For some fixes Exception printing and I have no idea why... +""" + +import subprocess +process = subprocess.Popen('', shell = True) +process.wait() + +def decode(string): + """Decode string.""" + + if (sys.version_info < (3, 0)) and isinstance(string, string_types): + string = codecs.decode(string, 'latin1') + + return(string) + + +def encode(string): + """Clean string for encoding.""" + + if (sys.version_info < (3, 0)) and isinstance(string, unicode): + string = codecs.encode(string, 'utf-8') + + return(string) + + +class CritError(Exception): + pass + +class ColoredError(Exception): + """Colorized error messages.""" + + def __init__(self, message = '', trace = ''): + if message: + message = decode(message) + message = '\n\n' + colored(message, color = metadata.color_failure) + if trace: + trace = decode(trace) + message += '\n\n' + colored(trace, color = metadata.color_failure) + + super(ColoredError, self).__init__(encode(message)) + +class ProgramError(ColoredError): + """Program execution exception.""" + + pass \ No newline at end of file diff --git a/17/replication_package/code/lib/gslab_make/private/exceptionclasses.pyc b/17/replication_package/code/lib/gslab_make/private/exceptionclasses.pyc new file mode 100644 index 0000000000000000000000000000000000000000..560c15b92e8782211b355452c66570c4269ffd68 Binary files /dev/null and b/17/replication_package/code/lib/gslab_make/private/exceptionclasses.pyc differ diff --git a/17/replication_package/code/lib/gslab_make/private/messages.py b/17/replication_package/code/lib/gslab_make/private/messages.py new file mode 100644 index 0000000000000000000000000000000000000000..2f4e3c2bf21b5051c716456d004bf8baeb03dbe6 --- /dev/null +++ b/17/replication_package/code/lib/gslab_make/private/messages.py @@ -0,0 +1,132 @@ +# -*- coding: utf-8 -*- +from __future__ import absolute_import, division, print_function, unicode_literals +from future.utils import raise_from, string_types +from builtins import (bytes, str, open, super, range, + zip, round, input, int, pow, object) + + +# ~~~~~~~~~~~~~~~ # +# Define messages # +# ~~~~~~~~~~~~~~~ # + +# Critical errors +crit_error_unknown_system = \ + '\nERROR! Your operating system `%s` is unknown. `gslab_make` only supports the following operating systems: `posix`, `nt`.' +crit_error_no_makelog = \ + '\nERROR! Makelog `%s` cannot be found. ' + \ + 'This could be for the following reasons:\n' + \ + '- Makelog was not started (via `start_makelog`)\n' + \ + '- Makelog ended (via `end_makelog`) prematurely\n' + \ + '- Makelog deleted or moved after started' + +# ACTION ITEM: `end_makelog` CURRENTLY DOESN'T ACTUALLY TURN MAKE LOG STATUS OFF + +crit_error_no_program_output = \ + '\nERROR! Program output `%s` is expected from `%s` but cannot be found. ' + \ + 'Certain applications (`matlab`, `sas`, `stata`) automatically create program outputs when run using system command. ' + \ + '`gslab_make` attempts to migrate these program outputs appropriately. ' + \ + 'For further detail, refer to the traceback below.' +crit_error_no_key = \ + '\nERROR! Argument `paths` is missing a value for key `%s`. ' + \ + 'Add a path for `%s` to your paths dictionary.' +crit_error_no_file = \ + '\nERROR! File `%s` cannot be found.' +crit_error_no_files = \ + '\nERROR! Files matching pattern `%s` cannot be found.' +crit_error_no_path = \ + '\nERROR! Path `%s` cannot be found.' 
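+
+# Illustrative sketch (hypothetical path): the messages above are plain
+# %-format templates that the rest of the library fills in before logging,
+# e.g.
+#
+#   crit_error_no_path % 'input/data.csv'
+#   -> '\nERROR! Path `input/data.csv` cannot be found.'
+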
+crit_error_no_path_wildcard = \ + '\nERROR! Paths matching pattern `%s` cannot be found.' +crit_error_no_attributes = \ + '\nERROR! Cannot open git attributes file for repository. Confirm that repository has git attributes file.' +crit_error_bad_command = \ + '\nERROR! The following command cannot be executed by operating system.\n' + \ + ' > %s\n' + \ + 'This could be because the command may be misspecified or does not exist. ' + \ + 'For further detail, refer to the traceback below.' +crit_error_bad_move = \ + '\nERROR! An error was encountered attempting to link/copy with the following instruction in file `%s`.\n' + \ + ' > %s\n' + \ + 'Link/copy instructions should be specified in the following format:\n' + \ + ' > destination | source\n' + \ + 'For further detail, refer to the traceback below.' +crit_error_move_command = \ + '\nERROR! The following command cannot be executed by operating system.\n' + \ + ' > %s\n' + \ + 'Check permissions and if on Windows, run as administrator. ' + \ + 'For further detail, refer to the traceback below.' +crit_error_extension = \ + '\nERROR! Program `%s` does not have correct extension. ' + \ + 'Program should have one of the following extensions: %s.' +crit_error_path_mapping = \ + '\nERROR! Argument `paths` is missing a value for key `%s`. ' + \ + '`{%s}` found in the following instruction in file `%s`.\n' + \ + ' > %s\n' + \ + 'Confirm that your config user file contains an external dependency for {%s} and that it has been properly loaded (via `update_paths`). ' + \ + 'For further detail, refer to the traceback below.' +crit_error_no_repo = \ + '\nERROR! Current working directory is not part of a git repository.' +crit_error_not_float = \ + '\nERROR! You are attempting to round or format a value (`%s`) that is not a number.' +crit_error_no_input_table = \ + '\nERROR! None of the inputs match the tab name for table `%s`.' +crit_error_not_enough_values = \ + '\nERROR! Not enough values in input for table `%s`.' +crit_error_too_many_values = \ + '\nERROR! Too many values in input for table `%s`.' +crit_error_no_tag = \ + '\nERROR! Input `%s` is missing a tab name.' + +# Syntax errors +syn_error_wildcard = \ + '\nERROR! Destination and source must have same number of wildcards (`*`). ' + \ + 'Fix the following instruction in file `%s`.\n' + \ + ' > %s' + +# Type errors +type_error_file_list = \ + '\nERROR! Files `%s` must be specified as a list.' +type_error_dir_list = \ + '\nERROR! Directories `%s` must be specified as a list.' +type_error_not_dir = \ + '\nERROR! Path `%s` is not a directory.' + +# Warnings +warning_glob = \ + 'WARNING! No files were found for path `%s` when walking to a depth of `%s`.' +warning_lyx_type = \ + 'WARNING! Document type `%s` is unrecognized. ' + \ + 'Reverting to default of no special document type.' +warning_modified_files = \ + 'WARNING! The following target files have been modified according to git status:\n' + \ + '%s' +warning_git_file_print = \ + '\nWARNING! Certain files tracked by git exceed the config size limit (%s MB). ' + \ + 'See makelog for list of files.' +warning_git_file_log = \ + '\nWARNING! Certain files tracked by git exceed the config size limit (%s MB). ' + \ + 'See below for list of files.' +warning_git_repo = \ + '\nWARNING! Total size of files tracked by git exceed the repository config limit (%s MB).' +warning_git_lfs_file_print = \ + '\nWARNING! Certain files tracked by git-lfs exceed the config size limit (%s MB). ' + \ + 'See makelog for list of files.' 
+warning_git_lfs_file_log = \ + '\nWARNING! Certain files tracked by git-lfs exceed the config size limit (%s MB). ' + \ + 'See below for list of files.' +warning_git_lfs_repo = \ + '\nWARNING! Total size of files tracked by git-lfs exceed the repository config limit (%s MB).' +warning_copy = \ + 'To copy the following file, enter "Yes". Otherwise, enter "No". ' + \ + 'Update any archives and documentation accordingly.\n' + \ + '> %s\n' + \ + 'will be uploaded to\n' + \ + '> %s\n' + \ + 'Input: ' + +# Notes +note_makelog_start = 'Makelog started: ' +note_makelog_end = 'Makelog ended: ' +note_working_directory = 'Working directory: ' + +note_dash_line = '-' * 80 \ No newline at end of file diff --git a/17/replication_package/code/lib/gslab_make/private/messages.pyc b/17/replication_package/code/lib/gslab_make/private/messages.pyc new file mode 100644 index 0000000000000000000000000000000000000000..77cca9b74b7fc79fe25c1c201495c65631e30336 Binary files /dev/null and b/17/replication_package/code/lib/gslab_make/private/messages.pyc differ diff --git a/17/replication_package/code/lib/gslab_make/private/metadata.py b/17/replication_package/code/lib/gslab_make/private/metadata.py new file mode 100644 index 0000000000000000000000000000000000000000..1a83e023dc7b6918072384e3bdda1de485e0eadd --- /dev/null +++ b/17/replication_package/code/lib/gslab_make/private/metadata.py @@ -0,0 +1,120 @@ +# -*- coding: utf-8 -*- +from __future__ import absolute_import, division, print_function, unicode_literals +from future.utils import raise_from, string_types +from builtins import (bytes, str, open, super, range, + zip, round, input, int, pow, object) + +# ~~~~~~~~~~~~~~~ # +# Define metadata # +# ~~~~~~~~~~~~~~~ # + +makelog_started = False + +color_success = None +color_failure = 'red' +color_in_process = 'cyan' + +commands = { + 'posix': + {'makecopy' : 'cp -a \"%s\" \"%s\"', + 'makelink' : 'ln -s \"%s\" \"%s\"', + 'rmdir' : 'rm %s \"%s\"', + 'jupyter' : '%s nbconvert --ExecutePreprocessor.timeout=-1 %s \"%s\"', + 'lyx' : '%s %s \"%s\"', + 'latex' : '%s -output-directory=latex_auxiliary_dir %s \"%s\"', + 'math' : '%s < \"%s\" %s', + 'matlab' : '%s %s -r \"try run(\'%s\'); catch e, fprintf(getReport(e)), exit(1); end; exit(0)\" -logfile \"%s\"', + 'perl' : '%s %s \"%s\" %s', + 'python' : '%s %s \"%s\" %s', + 'r' : '%s %s \"%s\"', + 'sas' : '%s %s -log -print %s', + 'st' : '%s \"%s\"', + 'stata' : '%s %s do \\\"%s\\\"'}, + 'nt': + {'makecopy' : '%s xcopy /E /Y /Q /I /K \"%s\" \"%s\"', + 'makelink' : 'mklink %s \"%s\" \"%s\"', + 'rmdir' : 'rmdir %s \"%s\"', + 'jupyter' : '%s nbconvert --ExecutePreprocessor.timeout=-1 %s \"%s\"', + 'lyx' : '%s %s \"%s\"', + 'latex' : '%s -output-directory=latex_auxiliary_dir %s \"%s\"', + 'math' : '%s < \"%s\" %s', + 'matlab' : '%s %s -r \"try run(\'%s\'); catch e, fprintf(getReport(e)), exit(1); end; exit(0)\" -logfile \"%s\"', + 'perl' : '%s %s \"%s\" %s', + 'python' : '%s %s \"%s\" %s', + 'r' : '%s %s \"%s\"', + 'sas' : '%s %s -log -print %s', + 'st' : '%s \"%s\"', + 'stata' : '%s %s do \\\"%s\\\"'}, +} + +default_options = { + 'posix': + {'rmdir' : '-rf', + 'jupyter' : '--to notebook --inplace --execute', + 'lyx' : '-e pdf2', + 'latex' : '', + 'math' : '-noprompt', + 'matlab' : '-nosplash -nodesktop', + 'perl' : '', + 'python' : '', + 'r' : '--no-save', + 'sas' : '', + 'st' : '', + 'stata' : '-e'}, + 'nt': + {'rmdir' : '/s /q', + 'jupyter' : '--to notebook --inplace --execute', + 'lyx' : '-e pdf2', + 'latex' : '', + 'math' : '-noprompt', + 'matlab' : '-nosplash -minimize 
-wait', + 'perl' : '', + 'python' : '', + 'r' : '--no-save', + 'sas' : '-nosplash', + 'st' : '', + 'stata' : '/e'} +} + +default_executables = { + 'posix': + {'git-lfs' : 'git-lfs', + 'jupyter' : 'python -m jupyter', + 'lyx' : 'lyx', + 'latex' : 'pdflatex', + 'math' : 'math', + 'matlab' : 'matlab', + 'perl' : 'perl', + 'python' : 'python', + 'r' : 'Rscript', + 'sas' : 'sas', + 'st' : 'st', + 'stata' : 'stata-mp'}, + 'nt': + {'git-lfs' : 'git-lfs', + 'jupyter' : 'python -m jupyter', + 'lyx' : 'LyX2.3', + 'latex' : 'pdflatex', + 'math' : 'math', + 'matlab' : 'matlab', + 'perl' : 'perl', + 'python' : 'python', + 'r' : 'Rscript', + 'sas' : 'sas', + 'st' : 'st', + 'stata' : 'StataMP-64'}, +} + +extensions = { + 'jupyter' : ['.ipynb', '.IPYNB'], + 'lyx' : ['.lyx', '.LYX'], + 'latex' : ['.tex', '.TEX'], + 'math' : ['.m', '.M'], + 'matlab' : ['.m', '.M'], + 'perl' : ['.pl', '.PL'], + 'python' : ['.py', '.PY'], + 'r' : ['.r', '.R'], + 'sas' : ['.sas', '.SAS'], + 'st' : ['.stc', '.STC', '.stcmd', '.STCMD'], + 'stata' : ['.do', '.DO'] +} \ No newline at end of file diff --git a/17/replication_package/code/lib/gslab_make/private/metadata.pyc b/17/replication_package/code/lib/gslab_make/private/metadata.pyc new file mode 100644 index 0000000000000000000000000000000000000000..6a8100ace44fb63a27b6c3810afd80979d04f2fb Binary files /dev/null and b/17/replication_package/code/lib/gslab_make/private/metadata.pyc differ diff --git a/17/replication_package/code/lib/gslab_make/private/movedirective.py b/17/replication_package/code/lib/gslab_make/private/movedirective.py new file mode 100644 index 0000000000000000000000000000000000000000..5ebf39543f0fe638e5cbbf19f10462587f5db9e3 --- /dev/null +++ b/17/replication_package/code/lib/gslab_make/private/movedirective.py @@ -0,0 +1,409 @@ +# -*- coding: utf-8 -*- +from __future__ import absolute_import, division, print_function, unicode_literals +from future.utils import raise_from, string_types +from builtins import (bytes, str, open, super, range, + zip, round, input, int, pow, object) + +import os +import re +import sys +import glob +import subprocess +from itertools import chain + +if (sys.version_info < (3, 0)) and (os.name == 'nt'): + import gslab_make.private.subprocess_fix as subprocess_fix +else: + import subprocess as subprocess_fix + +import gslab_make.private.messages as messages +import gslab_make.private.metadata as metadata +from gslab_make.private.exceptionclasses import CritError +from gslab_make.private.utility import convert_to_list, norm_path, file_to_array, format_traceback, decode + + +class MoveDirective(object): + """ + Directive for creating symbolic link or copy of data. + + Note + ---- + Parse line of text containing linking/copying instructions and represent as directive. + + Note + ---- + Takes glob-style wildcards. + + Parameters + ---------- + file: str + File containing linking/copying instructions (used for error messaging). + raw_line : str + Raw text of line containing linking/copying instructions (used for error messaging). + line : str + Line of text containing linking/copying instructions. + move_dir : str + Directory to write symlink/copy. + osname : str, optional + Name of OS. Defaults to ``os.name``. + + Attributes + ---------- + source : list + List of sources parsed from line. + destination : list + List of destinations parsed from line. + move_list : list + List of (source, destination) mappings parsed from line. 
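+
+    Example
+    -------
+    A rough sketch of the wildcard behavior noted above (paths are
+    hypothetical). With files ``source_a`` and ``source_b`` on disk, the
+    instruction line ``destination_* | source_*`` is parsed into a
+    ``move_list`` whose destinations are written under ``move_dir``:
+
+    .. code-block:: md
+
+        [('source_a', '<move_dir>/destination_a'),
+         ('source_b', '<move_dir>/destination_b')]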
+ """ + + def __init__(self, raw_line, file, line, move_dir, osname = os.name): + self.raw_line = raw_line + self.file = file + self.line = line + self.move_dir = move_dir + self.osname = osname + self.check_os() + self.get_paths() + self.check_paths() + self.get_move_list() + + def check_os(self): + """Check OS is either POSIX or NT. + + Returns + ------- + None + """ + + if self.osname not in ['posix', 'nt']: + raise CritError(messages.crit_error_unknown_system % self.osname) + + def get_paths(self): + """Parse sources and destinations from line. + + Returns + ------- + None + """ + + try: + self.line = self.line.split('|') + self.line = [l.strip() for l in self.line] + self.line = [l.strip('"\'') for l in self.line] + self.destination, self.source = self.line + except Exception: + error_message = messages.crit_error_bad_move % (self.raw_line, self.file) + error_message = error_message + format_traceback() + raise_from(CritError(error_message), None) + + self.source = norm_path(self.source) + self.destination = norm_path(os.path.join(self.move_dir, self.destination)) + + def check_paths(self): + """Check sources and destination exist and have same number of wildcards. + + Returns + ------- + None + """ + + if re.findall('\*', self.source) != re.findall('\*', self.destination): + raise SyntaxError(messages.syn_error_wildcard % (self.raw_line, self.file)) + + if re.search('\*', self.source): + if not glob.glob(self.source): + raise CritError(messages.crit_error_no_path_wildcard % self.source) + else: + if not os.path.exists(self.source): + raise CritError(messages.crit_error_no_path % self.source) + + def get_move_list(self): + """Interpret wildcards to get list of paths that meet criteria. + + Returns + ------- + None + """ + if re.search('\*', self.source): + self.source_list = glob.glob(self.source) + self.destination_list = [self.extract_wildcards(t) for t in self.source_list] + self.destination_list = [self.fill_in_wildcards(s) for s in self.destination_list] + else: + self.source_list = [self.source] + self.destination_list = [self.destination] + + self.move_list = list(zip(self.source_list, self.destination_list)) + + def extract_wildcards(self, f): + """Extract wildcard characters from source path. + + Notes + ----- + Suppose path ``foo.py`` and glob pattern ``*.py``. + The wildcard characters would therefore be ``foo``. + + Parameters + ---------- + f : str + Source path from which to extract wildcard characters. + + Returns + ------- + wildcards : iter + Iterator of extracted wildcard characters. + """ + + regex = re.escape(self.source) + regex = regex.split('\*') + regex = '(.*)'.join(regex) + + wildcards = re.findall(regex, f) # Returns list if single match, list of set if multiple matches + wildcards = [(w, ) if isinstance(w, string_types) else w for w in wildcards] + wildcards = chain(*wildcards) + + return(wildcards) + + def fill_in_wildcards(self, wildcards): + """Fill in wildcards for destination path. + + Notes + ----- + Use extracted wildcard characters from a source path to create + corresponding destination path. + + Parameters + ---------- + wildcards: iterator + Extracted wildcard characters (returned from :func:`.extract_wildcards`). + + Returns + ------- + f : str + Destination path + """ + + f = self.destination + for w in wildcards: + f = re.sub('\*', w, f, 1) + + return(f) + + def create_symlinks(self): + """Create symlinks. 
+ + Returns + ------- + None + """ + + if self.osname == 'posix': + self.move_posix(movetype = 'symlink') + elif self.osname == 'nt': + self.move_nt(movetype = 'symlink') + + return(self.move_list) + + def create_copies(self): + """Create copies. + + Returns + ------- + None + """ + + if self.osname == 'posix': + self.move_posix(movetype = 'copy') + elif self.osname == 'nt': + self.move_nt(movetype = 'copy') + + return(self.move_list) + + def move_posix(self, movetype): + """Create symlinks/copies using POSIX shell command specified in metadata. + + Parameters + ---------- + movetype : str + Type of file movement. Takes either ``'copy'`` or ``'symlink'``. + + Returns + ------- + None + """ + + for source, destination in self.move_list: + if movetype == 'copy': + command = metadata.commands[self.osname]['makecopy'] % (source, destination) + elif movetype == 'symlink': + command = metadata.commands[self.osname]['makelink'] % (source, destination) + + process = subprocess_fix.Popen(command, + shell = True, + stdout = subprocess.PIPE, + stderr = subprocess.PIPE, + universal_newlines = True) + process.wait() + stdout, stderr = process.communicate() + + if process.returncode != 0: + error_message = messages.crit_error_move_command % command + error_message = error_message + format_traceback(stderr) + raise CritError(error_message) + + + def move_nt(self, movetype): + """Create symlinks/copies using NT shell command specified in metadata. + + Parameters + ---------- + movetype : str + Type of file movement. Takes either ``'copy'`` or ``'symlink'``. + + Returns + ------- + None + """ + for source, destination in self.move_list: + if os.path.isdir(source): + link_option = '/d' + copy_option = '' + elif os.path.isfile(source): + link_option = '' + copy_option = 'cmd /c echo F | ' + + if movetype == 'copy': + command = metadata.commands[self.osname]['makecopy'] % (copy_option, source, destination) + elif movetype == 'symlink': + command = metadata.commands[self.osname]['makelink'] % (link_option, destination, source) + + process = subprocess_fix.Popen(command, + shell = True, + stdout = subprocess.PIPE, + stderr = subprocess.PIPE, + universal_newlines = True) + process.wait() + stdout, stderr = process.communicate() + + if process.returncode != 0: + error_message = messages.crit_error_move_command % command + error_message = error_message + format_traceback(stderr) + raise CritError(error_message) + + +class MoveList(object): + """ + List of move directives. + + Notes + ----- + Parse files containing linking/copying instructions and represent as move directives. + + Parameters + ---------- + file_list : list + List of files from which to parse linking/copying instructions. + move_dir : str + Directory to write symlink/copy. + mapping_dict : dict, optional + Dictionary of path mappings used to parse linking/copying instructions. + Defaults to no mappings. + + Attributes + ---------- + move_directive_list : list + List of move directives. + """ + + def __init__(self, + file_list, + move_dir, + mapping_dict = {}): + + self.file_list = file_list + self.move_dir = move_dir + self.mapping_dict = mapping_dict + self.parse_file_list() + self.get_paths() + self.get_move_directive_list() + + def parse_file_list(self): + """Parse wildcards in list of files. 
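+
+        Instruction file paths may contain ``*`` wildcards; each pattern is
+        expanded with ``glob``, and a critical error is raised if no files
+        match.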
+ + Returns + ------- + None + """ + + if self.file_list: + self.file_list = convert_to_list(self.file_list, 'file') + self.file_list = [norm_path(file) for file in self.file_list] + + file_list_parsed = [f for file in self.file_list for f in glob.glob(file)] + if file_list_parsed: + self.file_list = file_list_parsed + else: + error_list = [decode(f) for f in self.file_list] + raise CritError(messages.crit_error_no_files % error_list) + + def get_paths(self): + """Normalize paths. + + Returns + ------- + None + """ + + self.move_dir = norm_path(self.move_dir) + self.file_list = [norm_path(f) for f in self.file_list] + + def get_move_directive_list(self): + """Parse list of files to create symlink directives. + + Returns + ------- + None + """ + lines = [] + for file in self.file_list: + for raw_line in file_to_array(file): + try: + line = raw_line.format(**self.mapping_dict) + lines.append((file, raw_line, line)) + except KeyError as e: + key = decode(e).lstrip("u'").rstrip("'") + error_message = messages.crit_error_path_mapping % (key, key, file, raw_line, key) + error_message = error_message + format_traceback() + raise_from(CritError(error_message), None) + + self.move_directive_list = [MoveDirective(file, raw_line, line, self.move_dir) for (file, raw_line, line) in lines] + + def create_symlinks(self): + """Create symlinks according to directives. + + Returns + ------- + move_map : list + List of (source, destination) for each symlink created. + """ + + move_map = [] + for move in self.move_directive_list: + move_map.extend(move.create_symlinks()) + + return(move_map) + + def create_copies(self): + """Create copies according to directives. + + Returns + ------- + move_map : list + List of (source, destination) for each copy created. + """ + + move_map = [] + for move in self.move_directive_list: + move_map.extend(move.create_copies()) + + return(move_map) \ No newline at end of file diff --git a/17/replication_package/code/lib/gslab_make/private/movedirective.pyc b/17/replication_package/code/lib/gslab_make/private/movedirective.pyc new file mode 100644 index 0000000000000000000000000000000000000000..1a63da619efb264e0870696eea638184a02b011e Binary files /dev/null and b/17/replication_package/code/lib/gslab_make/private/movedirective.pyc differ diff --git a/17/replication_package/code/lib/gslab_make/private/programdirective.py b/17/replication_package/code/lib/gslab_make/private/programdirective.py new file mode 100644 index 0000000000000000000000000000000000000000..f4f82661f7973dc9f99496a593298ee12fb14602 --- /dev/null +++ b/17/replication_package/code/lib/gslab_make/private/programdirective.py @@ -0,0 +1,364 @@ +# -*- coding: utf-8 -*- +from __future__ import absolute_import, division, print_function, unicode_literals +from future.utils import raise_from, string_types +from builtins import (bytes, str, open, super, range, + zip, round, input, int, pow, object) + +import os +import io +import sys +import shutil +import subprocess + +if (sys.version_info < (3, 0)) and (os.name == 'nt'): + import gslab_make.private.subprocess_fix as subprocess_fix +else: + import subprocess as subprocess_fix + +from termcolor import colored +import colorama +colorama.init() + +import gslab_make.private.messages as messages +import gslab_make.private.metadata as metadata +from gslab_make.private.exceptionclasses import CritError +from gslab_make.private.utility import norm_path, format_list, format_traceback, decode + + +class Directive(object): + """ + Directive. 
+ + Note + ---- + Contains instructions on how to run shell commands. + + Parameters + ---------- + makelog : str + Path of make log. + log : str, optional + Path of directive log. Directive log is only written if specified. + Defaults to ``''`` (i.e., not written). + osname : str, optional + Name of OS. Defaults to ``os.name``. + shell : bool, optional + See `here `_. + Defaults to ``True``. + + Returns + ------- + None + """ + + def __init__(self, + makelog, + log = '', + osname = os.name, + shell = True): + + self.makelog = makelog + self.log = log + self.osname = osname + self.shell = shell + self.check_os() + self.get_paths() + + def check_os(self): + """Check OS is either POSIX or NT. + + Returns + ------- + None + """ + + if self.osname not in ['posix', 'nt']: + raise CritError(messages.crit_error_unknown_system % self.osname) + + def get_paths(self): + """Normalize paths. + + Returns + ------- + None + """ + + self.makelog = norm_path(self.makelog) + self.log = norm_path(self.log) + + def execute_command(self, command): + """Execute shell command. + + Parameters + ---------- + command : str + Shell command to execute. + + Returns + ------- + exit : tuple + Tuple (exit code, error message) for shell command. + """ + + self.output = 'Executing command: `%s`' % command + print(colored(self.output, metadata.color_in_process)) + + try: + if not self.shell: + command = command.split() + + process = subprocess_fix.Popen(command, + stdout = subprocess.PIPE, + stderr = subprocess.PIPE, + shell = self.shell, + universal_newlines = True) + stdout, stderr = process.communicate() + exit = (process.returncode, stderr) + + if stdout: + self.output += '\n' + decode(stdout) + if stderr: + self.output += '\n' + decode(stderr) + pass + + return(exit) + except: + error_message = messages.crit_error_bad_command % command + error_message = error_message + format_traceback() + raise_from(CritError(error_message), None) + + def write_log(self): + """Write logs for shell command. + + Returns + ------- + None + """ + + if self.makelog: + if not (metadata.makelog_started and os.path.isfile(self.makelog)): + raise CritError(messages.crit_error_no_makelog % self.makelog) + with io.open(self.makelog, 'a', encoding = 'utf-8', errors = 'ignore') as f: + print(self.output, file = f) + + if self.log: + with io.open(self.log, 'w', encoding = 'utf-8', errors = 'ignore') as f: + f.write(self.output) + + +class ProgramDirective(Directive): + """ + Program directive. + + Notes + ----- + Contains instructions on how to run a program through shell command. + + Parameters + ---------- + See :class:`.Directive`. + + application : str + Name of application to run program. + program : str + Path of program to run. + executable : str, optional + Executable to use for shell command. Defaults to executable specified in metadata. + option : str, optional + Options for shell command. Defaults to options specified in metadata. + args : str, optional + Arguments for shell command. Defaults to no arguments. + + Attributes + ---------- + program_dir : str + Directory of program parsed from program. + program_base : str + ``program_name.program_ext`` of program parsed from program. + program_name : str + Name of program parsed from program. + program_ext : str + Extension of program parsed from program. 
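+
+    Example
+    -------
+    A sketch of how a program path is split into the attributes above (the
+    path is hypothetical):
+
+    .. code-block:: md
+
+        program      = '/User/root/analysis/plot.py'
+        program_dir  = '/User/root/analysis'
+        program_base = 'plot.py'
+        program_name = 'plot'
+        program_ext  = '.py'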
+ + Returns + ------- + None + """ + + def __init__(self, + application, + program, + executable = '', + option = '', + args = '', + **kwargs): + + self.application = application + self.program = program + self.executable = executable + self.option = option + self.args = args + super(ProgramDirective, self).__init__(**kwargs) + self.parse_program() + self.check_program() + self.get_executable() + self.get_option() + + def parse_program(self): + """Parse program for directory, name, and extension. + + Returns + ------- + None + """ + + self.program = norm_path(self.program) + self.program_dir = os.path.dirname(self.program) + self.program_base = os.path.basename(self.program) + self.program_name, self.program_ext = os.path.splitext(self.program_base) + + def check_program(self): + """Check program exists and has correct extension given application. + + Returns + ------- + None + """ + + if not os.path.isfile(self.program): + raise CritError(messages.crit_error_no_file % self.program) + + if self.program_ext not in metadata.extensions[self.application]: + extensions = format_list(metadata.extensions[self.application]) + raise CritError(messages.crit_error_extension % (self.program, extensions)) + + def get_executable(self): + """Set executable to default from metadata if unspecified. + + Returns + ------- + None + """ + + if not self.executable: + self.executable = metadata.default_executables[self.osname][self.application] + + def get_option(self): + """Set options to default from metadata if unspecified. + + Returns + ------- + None + """ + + if not self.option: + self.option = metadata.default_options[self.osname][self.application] + + def move_program_output(self, program_output, log_file = ''): + """Move program outputs. + + Notes + ----- + Certain applications create program outputs that need to be moved to + appropriate logging files. + + Parameters + ---------- + program_output : str + Path of program output. + log_file : str, optional + Path of log file. Log file is only written if specified. + Defaults to ``''`` (i.e., not written). + """ + + program_output = norm_path(program_output) + + try: + with io.open(program_output, 'r', encoding = 'utf-8', errors = 'ignore') as f: + out = f.read() + except: + error_message = messages.crit_error_no_program_output % (program_output, self.program) + error_message = error_message + format_traceback() + raise_from(CritError(error_message), None) + + if self.makelog: + if not (metadata.makelog_started and os.path.isfile(self.makelog)): + raise CritError(messages.crit_error_no_makelog % self.makelog) + with io.open(self.makelog, 'a', encoding = 'utf-8', errors = 'ignore') as f: + print(out, file = f) + + if log_file: + if program_output != log_file: + shutil.copy2(program_output, log_file) + os.remove(program_output) + else: + os.remove(program_output) + + return(out) + + +class SASDirective(ProgramDirective): + """ + SAS directive. + + Notes + ----- + Contains instructions on how to run a SAS program through shell command. + + Parameters + ---------- + See :class:`.ProgramDirective`. + + lst : str, optional + Path of directive lst. Directive lst is only written if specified. + Defaults to ``''`` (i.e., not written). + """ + def __init__(self, + lst = '', + **kwargs): + + self.lst = lst + super(SASDirective, self).__init__(**kwargs) + + +class LyXDirective(ProgramDirective): + """ + LyX directive. + + Notes + ----- + Contains instructions on how to run a LyX program through shell command. 
+ + Parameters + ---------- + See :class:`.ProgramDirective`. + + output_dir : str + Directory to write PDFs. + doctype : str, optional + Type of LyX document. Takes either ``'handout'`` and ``'comments'``. + All other strings will default to standard document type. + Defaults to ``''`` (i.e., standard document type). + """ + + def __init__(self, + output_dir, + doctype = '', + **kwargs): + + self.output_dir = output_dir + self.doctype = doctype + super(LyXDirective, self).__init__(**kwargs) + self.check_doctype() + + def check_doctype(self): + """Check document type is valid. + + Returns + ------- + None + """ + + if self.doctype not in ['handout', 'comments', '']: + print(colored(messages.warning_lyx_type % self.doctype, 'red')) + self.doctype = '' diff --git a/17/replication_package/code/lib/gslab_make/private/programdirective.pyc b/17/replication_package/code/lib/gslab_make/private/programdirective.pyc new file mode 100644 index 0000000000000000000000000000000000000000..add4c54168c27d8688759101df22be5db647d73e Binary files /dev/null and b/17/replication_package/code/lib/gslab_make/private/programdirective.pyc differ diff --git a/17/replication_package/code/lib/gslab_make/private/subprocess_fix.py b/17/replication_package/code/lib/gslab_make/private/subprocess_fix.py new file mode 100644 index 0000000000000000000000000000000000000000..cdbd44e055dca025c6c0a004652eb1208430773b --- /dev/null +++ b/17/replication_package/code/lib/gslab_make/private/subprocess_fix.py @@ -0,0 +1,156 @@ +## From https://gist.github.com/vaab/2ad7051fc193167f15f85ef573e54eb9 + +## issue: https://bugs.python.org/issue19264 + +import os +import ctypes +import subprocess +import _subprocess +from ctypes import byref, windll, c_char_p, c_wchar_p, c_void_p, \ + Structure, sizeof, c_wchar, WinError +from ctypes.wintypes import BYTE, WORD, LPWSTR, BOOL, DWORD, LPVOID, \ + HANDLE + + +## +## Types +## + +CREATE_UNICODE_ENVIRONMENT = 0x00000400 +LPCTSTR = c_char_p +LPTSTR = c_wchar_p +LPSECURITY_ATTRIBUTES = c_void_p +LPBYTE = ctypes.POINTER(BYTE) + +class STARTUPINFOW(Structure): + _fields_ = [ + ("cb", DWORD), ("lpReserved", LPWSTR), + ("lpDesktop", LPWSTR), ("lpTitle", LPWSTR), + ("dwX", DWORD), ("dwY", DWORD), + ("dwXSize", DWORD), ("dwYSize", DWORD), + ("dwXCountChars", DWORD), ("dwYCountChars", DWORD), + ("dwFillAtrribute", DWORD), ("dwFlags", DWORD), + ("wShowWindow", WORD), ("cbReserved2", WORD), + ("lpReserved2", LPBYTE), ("hStdInput", HANDLE), + ("hStdOutput", HANDLE), ("hStdError", HANDLE), + ] + +LPSTARTUPINFOW = ctypes.POINTER(STARTUPINFOW) + + +class PROCESS_INFORMATION(Structure): + _fields_ = [ + ("hProcess", HANDLE), ("hThread", HANDLE), + ("dwProcessId", DWORD), ("dwThreadId", DWORD), + ] + +LPPROCESS_INFORMATION = ctypes.POINTER(PROCESS_INFORMATION) + + +class DUMMY_HANDLE(ctypes.c_void_p): + + def __init__(self, *a, **kw): + super(DUMMY_HANDLE, self).__init__(*a, **kw) + self.closed = False + + def Close(self): + if not self.closed: + windll.kernel32.CloseHandle(self) + self.closed = True + + def __int__(self): + return self.value + + +CreateProcessW = windll.kernel32.CreateProcessW +CreateProcessW.argtypes = [ + LPCTSTR, LPTSTR, LPSECURITY_ATTRIBUTES, + LPSECURITY_ATTRIBUTES, BOOL, DWORD, LPVOID, LPCTSTR, + LPSTARTUPINFOW, LPPROCESS_INFORMATION, +] +CreateProcessW.restype = BOOL + + +## +## Patched functions/classes +## + +def CreateProcess(executable, args, _p_attr, _t_attr, + inherit_handles, creation_flags, env, cwd, + startup_info): + """Create a process supporting unicode executable and args for 
win32 + + Python implementation of CreateProcess using CreateProcessW for Win32 + + """ + + si = STARTUPINFOW( + dwFlags=startup_info.dwFlags, + wShowWindow=startup_info.wShowWindow, + cb=sizeof(STARTUPINFOW), + ## XXXvlab: not sure of the casting here to ints. + hStdInput=int(startup_info.hStdInput) if startup_info.hStdInput else None, + hStdOutput=int(startup_info.hStdOutput) if startup_info.hStdOutput else None, + hStdError=int(startup_info.hStdError) if startup_info.hStdError else None, + ) + + wenv = None + if env is not None: + ## LPCWSTR seems to be c_wchar_p, so let's say CWSTR is c_wchar + env = (unicode("").join([ + unicode("%s=%s\0") % (k, v) + for k, v in env.items()])) + unicode("\0") + wenv = (c_wchar * len(env))() + wenv.value = env + + pi = PROCESS_INFORMATION() + creation_flags |= CREATE_UNICODE_ENVIRONMENT + + if CreateProcessW(executable, args, None, None, + inherit_handles, creation_flags, + wenv, cwd, byref(si), byref(pi)): + return (DUMMY_HANDLE(pi.hProcess), DUMMY_HANDLE(pi.hThread), + pi.dwProcessId, pi.dwThreadId) + raise WinError() + + +class Popen(subprocess.Popen): + """This superseeds Popen and corrects a bug in cPython 2.7 implem""" + + def _execute_child(self, args, executable, preexec_fn, close_fds, + cwd, env, universal_newlines, + startupinfo, creationflags, shell, to_close, + p2cread, p2cwrite, + c2pread, c2pwrite, + errread, errwrite): + """Code from part of _execute_child from Python 2.7 (9fbb65e) + + There are only 2 little changes concerning the construction of + the the final string in shell mode: we preempt the creation of + the command string when shell is True, because original function + will try to encode unicode args which we want to avoid to be able to + sending it as-is to ``CreateProcess``. + + """ + if not isinstance(args, subprocess.types.StringTypes): + args = subprocess.list2cmdline(args) + + if startupinfo is None: + startupinfo = subprocess.STARTUPINFO() + if shell: + startupinfo.dwFlags |= _subprocess.STARTF_USESHOWWINDOW + startupinfo.wShowWindow = _subprocess.SW_HIDE + comspec = os.environ.get("COMSPEC", unicode("cmd.exe")) + args = unicode('{} /c "{}"').format(comspec, args) + if (_subprocess.GetVersion() >= 0x80000000 or + os.path.basename(comspec).lower() == "command.com"): + w9xpopen = self._find_w9xpopen() + args = unicode('"%s" %s') % (w9xpopen, args) + creationflags |= _subprocess.CREATE_NEW_CONSOLE + + super(Popen, self)._execute_child(args, executable, + preexec_fn, close_fds, cwd, env, universal_newlines, + startupinfo, creationflags, False, to_close, p2cread, + p2cwrite, c2pread, c2pwrite, errread, errwrite) + +_subprocess.CreateProcess = CreateProcess diff --git a/17/replication_package/code/lib/gslab_make/private/utility.py b/17/replication_package/code/lib/gslab_make/private/utility.py new file mode 100644 index 0000000000000000000000000000000000000000..b3d1b87629a1412d0bc6ce65425273de204cffb3 --- /dev/null +++ b/17/replication_package/code/lib/gslab_make/private/utility.py @@ -0,0 +1,308 @@ +# -*- coding: utf-8 -*- +from __future__ import absolute_import, division, print_function, unicode_literals +from future.utils import raise_from, string_types +from builtins import (bytes, str, open, super, range, + zip, round, input, int, pow, object) + +import os +import re +import io +import sys +import glob +import yaml +import codecs +import filecmp +import traceback + +import gslab_make.private.messages as messages +from gslab_make.private.exceptionclasses import CritError + + +def decode(string): + """Decode string.""" + + 
if (sys.version_info < (3, 0)) and isinstance(string, string_types): + string = codecs.decode(string, 'latin1') + + return(string) + + +def encode(string): + """Clean string for encoding.""" + + if (sys.version_info < (3, 0)) and isinstance(string, unicode): + string = codecs.encode(string, 'utf-8') + + return(string) + + +def convert_to_list(obj, warning_type): + """Convert object to list.""" + + obj = [obj] if isinstance(obj, string_types) else obj + + if type(obj) is not list: + if (warning_type == 'dir'): + raise_from(TypeError(messages.type_error_dir_list % obj), None) + elif (warning_type == 'file'): + raise_from(TypeError(messages.type_error_file_list % obj), None) + + return(obj) + + +def norm_path(path): + """Normalize path to be OS-compatible.""" + + if path: + path = re.split('[/\\\\]+', path) + path = os.path.sep.join(path) + path = os.path.expanduser(path) + path = os.path.abspath(path) + + return(path) + + +def get_path(paths_dict, key, throw_error = True): + """Get path for key. + + Parameters + ---------- + path_dict : dict + Dictionary of paths. + key : str + Path to get from dictionary. + throw_error : bool + Return error instead of ``None``. Defaults to ``True``. + + Returns + ------- + path : str + Path requested. + """ + + try: + path = paths_dict[key] + if isinstance(path, string_types): + path = norm_path(path) + elif isinstance(path, list): + path = [norm_path(p) for p in path] + except KeyError: + if throw_error: + raise_from(CritError(messages.crit_error_no_key % (key, key)), None) + else: + path = None + + return(path) + + +def glob_recursive(path, depth, max_depth = 20, quiet = True): + """Walks through path. + + Notes + ----- + Takes glob-style wildcards. + + Parameters + ---------- + path : str + Path to walk through. + depth : int + Level of depth when walking through path. + max_depth : int + Maximum level of depth allowed. Defaults to 20. + quiet : bool, optional + Suppress warning if no files globbed. Defaults to ``True``. + + Returns + ------- + path_files : list + List of files contained in path. + """ + + depth = max_depth if depth > max_depth else depth + path_walk = norm_path(path) + path_files = glob.glob(path_walk) + + i = 0 + while i <= depth: + path_walk = os.path.join(path_walk, "*") + glob_files = glob.glob(path_walk) + if glob_files: + path_files.extend(glob_files) + i += 1 + else: + break + + path_files = [p for p in path_files if os.path.isfile(p)] + if not path_files and not quiet: + print(messages.warning_glob % (path, depth)) + + return(path_files) + + +def file_to_array(file_name): + """Read file and extract lines to list. + + Parameters + ---------- + file_name : str + Path of file to read. + + Returns + ------- + array : list + List of lines contained in file. + """ + + with io.open(file_name, encoding = 'utf-8') as f: + array = [line.strip() for line in f] + array = [line for line in array if line] + array = [line for line in array if not re.match('\#',line)] + + return(array) + + +def format_traceback(trace = ''): + """Format traceback message. + + Parameters + ---------- + trace : str + Traceback to format. Defaults to ``traceback.format_exc``. + + Notes + ----- + Format trackback for readability to pass into user messages. + + Returns + ------- + formatted : str + Formatted traceback. 
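+
+    Example
+    -------
+    A sketch with a hypothetical traceback: each line of the trace is
+    prefixed with ``' > '`` so that it reads as an indented block in the
+    makelog.
+
+    .. code-block:: md
+
+        > Traceback (most recent call last):
+        >   File "script.py", line 1, in <module>
+        > NameError: name 'x' is not defined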
+ """ + + if not trace: + trace = traceback.format_exc() + + trace = trace.strip() + trace = '\n' + decode(trace) + formatted = re.sub('\n', '\n > ', trace) + + return(formatted) + + +def format_message(message): + """Format message.""" + + message = message.strip() + star_line = '*' * (len(message) + 4) + formatted = star_line + '\n* %s *\n' + star_line + formatted = formatted % message + + return(formatted) + + +def format_list(list): + """Format list. + + Parameters + ---------- + list : list + List to format. + + Notes + ----- + Format list for readability to pass into user messages. + + Returns + ------- + formatted : str + Formatted list. + """ + + formatted = ['`' + str(item) + '`' for item in list] + formatted = ", ".join(formatted) + + return(formatted) + + +def open_yaml(path): + """Safely loads YAML file.""" + + path = norm_path(path) + + with io.open(path, 'r') as f: + stream = yaml.safe_load(f) + + return(stream) + + +# ~~~~~~~~~~ # +# DEPRECATED # +# ~~~~~~~~~~ # + +def check_duplicate(original, copy): + """Check duplicate. + + Parameters + ---------- + original : str + Original path. + copy : str + Path to check if duplicate. + + Returns + ------- + duplicate : bool + Destination is duplicate. + """ + + duplicate = os.path.exists(copy) + + if duplicate: + if os.path.isfile(original): + duplicate = filecmp.cmp(original, copy) + elif os.path.isdir(copy): + dircmp = filecmp.dircmp(original, copy, ignore = ['.DS_Store']) + duplicate = parse_dircmp(dircmp) + else: + duplicate = False + + return(duplicate) + + +def parse_dircmp(dircmp): + """Parse dircmp to see if directories duplicate. + + Parameters + ---------- + dircmp : ``filecmp.dircmp`` + dircmp to parse if directories duplicate. + + Returns + ------- + duplicate : bool + Directories are duplicates. 
+ """ + + # Check directory + if dircmp.left_only: + return False + if dircmp.right_only: + return False + if dircmp.diff_files: + return False + if dircmp.funny_files: + return False + if dircmp.common_funny: + return False + + # Check subdirectories + duplicate = True + + for subdir in dircmp.subdirs.itervalues(): + if duplicate: + duplicate = check_duplicate(subdir) + else: + break + + return(duplicate) \ No newline at end of file diff --git a/17/replication_package/code/lib/gslab_make/private/utility.pyc b/17/replication_package/code/lib/gslab_make/private/utility.pyc new file mode 100644 index 0000000000000000000000000000000000000000..4d22bed206e686d2843e8142e232515d2d22bf6e Binary files /dev/null and b/17/replication_package/code/lib/gslab_make/private/utility.pyc differ diff --git a/17/replication_package/code/lib/gslab_make/run_program.py b/17/replication_package/code/lib/gslab_make/run_program.py new file mode 100644 index 0000000000000000000000000000000000000000..72a21a0a68e6d7da6d1afbdde0529e8d46602001 --- /dev/null +++ b/17/replication_package/code/lib/gslab_make/run_program.py @@ -0,0 +1,1087 @@ +# -*- coding: utf-8 -*- +from __future__ import absolute_import, division, print_function, unicode_literals +from future.utils import raise_from, string_types +from builtins import (bytes, str, open, super, range, + zip, round, input, int, pow, object) + +import os +import re +import sys +import shutil +import traceback +import fileinput + +import nbformat +from nbconvert.preprocessors import ExecutePreprocessor + +from termcolor import colored +import colorama +colorama.init() + +import gslab_make.private.messages as messages +import gslab_make.private.metadata as metadata +from gslab_make.private.exceptionclasses import CritError, ColoredError, ProgramError +from gslab_make.private.programdirective import Directive, ProgramDirective, SASDirective, LyXDirective +from gslab_make.private.utility import get_path, format_message, norm_path +from gslab_make.write_logs import write_to_makelog + + +def run_jupyter(paths, program, timeout = None, kernel_name = ''): + """.. Run Jupyter notebook using system command. + + Runs notebook ``program`` using Python API, with notebook specified + in the form of ``notebook.ipynb``. + Status messages are appended to file ``makelog``. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + program : str + Path of script to run. + + Path Keys + --------- + makelog : str + Path of makelog. + + Note + ---- + We recommend leaving all other parameters to their defaults. + + Other Parameters + ---------------- + timeout : int, optional + Time to wait (in seconds) to finish executing a cell before raising exception. + Defaults to no timeout. + kernel_name : str, optional + Name of kernel to use for execution + (e.g., ``python2`` for standard Python 2 kernel, ``python3`` for standard Python 3 kernel). + Defaults to ``''`` (i.e., kernel specified in notebook). + + Returns + ------- + None + + Example + ------- + .. 
code-block:: python + + run_jupyter(paths, program = 'notebook.ipynb') + """ + + try: + program = norm_path(program) + + with open(program) as f: + message = 'Processing notebook: `%s`' % program + write_to_makelog(paths, message) + print(colored(message, 'cyan')) + + if not kernel_name: + kernel_name = 'python%s' % sys.version_info[0] + ep = ExecutePreprocessor(timeout = timeout, kernel_name = kernel_name) + nb = nbformat.read(f, as_version = 4) + ep.preprocess(nb, {'metadata': {'path': '.'}}) + + with open(program, 'wt') as f: + nbformat.write(nb, f) + except: + error_message = 'Error with `run_jupyter`. Traceback can be found below.' + error_message = format_message(error_message) + write_to_makelog(paths, error_message + '\n\n' + traceback.format_exc()) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def run_lyx(paths, program, doctype = '', **kwargs): + """.. Run LyX script using system command. + + Compiles document ``program`` using system command, with document specified + in the form of ``script.lyx``. Status messages are appended to file ``makelog``. + PDF outputs are written in directory ``output_dir``. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + program : str + Path of script to run. + doctype : str, optional + Type of LyX document. Takes either ``'handout'`` and ``'comments'``. + All other strings will default to standard document type. + Defaults to ``''`` (i.e., standard document type). + + Path Keys + --------- + makelog : str + Path of makelog. + output_dir : str + Directory to write PDFs. + + Note + ---- + We recommend leaving all other parameters to their defaults. + + Other Parameters + ---------------- + osname : str, optional + Name of OS. Used to determine syntax of system command. Defaults to ``os.name``. + shell : `bool`, optional + See `here `_. + Defaults to ``True``. + log : str, optional + Path of program log. Program log is only written if specified. + Defaults to ``''`` (i.e., not written). + executable : str, optional + Executable to use for system command. + Defaults to executable specified in :ref:`default settings`. + option : str, optional + Options for system command. Defaults to options specified in :ref:`default settings`. + args : str, optional + Not applicable. + + Returns + ------- + None + + Example + ------- + .. 
code-block:: python + + run_lyx(paths, program = 'script.lyx') + """ + + try: + makelog = get_path(paths, 'makelog') + output_dir = get_path(paths, 'output_dir') + direct = LyXDirective(output_dir = output_dir, + doctype = doctype, + application = 'lyx', + program = program, + makelog = makelog, + **kwargs) + + # Make handout/comments LyX file + if direct.doctype: + temp_name = os.path.join(direct.program_name + '_' + direct.doctype) + temp_program = os.path.join(direct.program_dir, temp_name + '.lyx') + + beamer = False + shutil.copy2(direct.program, temp_program) + + for line in fileinput.input(temp_program, inplace = True, backup = '.bak'): + if r'\textclass beamer' in line: + beamer = True + if direct.doctype == 'handout' and beamer and (r'\options' in line): + line = line.rstrip('\n') + ', handout\n' + elif direct.doctype == 'comments' and (r'\begin_inset Note Note' in line): + line = line.replace('Note Note', 'Note Greyedout') + + print(line) + else: + temp_name = direct.program_name + temp_program = direct.program + + # Execute + command = metadata.commands[direct.osname][direct.application] % (direct.executable, direct.option, temp_program) + exit_code, stderr = direct.execute_command(command) + direct.write_log() + if exit_code != 0: + error_message = 'LyX program executed with errors. Traceback can be found below.' + error_message = format_message(error_message) + raise_from(ProgramError(error_message, stderr), None) + + # Move PDF output + temp_pdf = os.path.join(direct.program_dir, temp_name + '.pdf') + output_pdf = os.path.join(direct.output_dir, direct.program_name + '.pdf') + + if temp_pdf != output_pdf: + shutil.copy2(temp_pdf, output_pdf) + os.remove(temp_pdf) + + # Remove handout/comments LyX file + if direct.doctype: + os.remove(temp_program) + except ProgramError: + raise + except: + error_message = 'Error with `run_lyx`. Traceback can be found below.' + error_message = format_message(error_message) + write_to_makelog(paths, error_message + '\n\n' + traceback.format_exc()) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def run_latex(paths, program, **kwargs): + """.. Run LaTeX script using system command. + + Compiles document ``program`` using system command, with document specified + in the form of ``script.tex``. Status messages are appended to file ``makelog``. + PDF outputs are written in directory ``output_dir``. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + program : str + Path of script to run. + + Path Keys + --------- + makelog : str + Path of makelog. + output_dir : str + Directory to write PDFs. + + Note + ---- + We recommend leaving all other parameters to their defaults. + + Note + ---- + This function creates and removes a directory named ``latex_auxiliary_dir``. + + Other Parameters + ---------------- + osname : str, optional + Name of OS. Used to determine syntax of system command. Defaults to ``os.name``. + shell : `bool`, optional + See `here `_. + Defaults to ``True``. + log : str, optional + Path of program log. Program log is only written if specified. + Defaults to ``''`` (i.e., not written). + executable : str, optional + Executable to use for system command. + Defaults to executable specified in :ref:`default settings`. + option : str, optional + Options for system command. Defaults to options specified in :ref:`default settings`. + args : str, optional + Not applicable. + + Returns + ------- + None + + Example + ------- + .. 
code-block:: python + + run_latex(paths, program = 'script.tex') + """ + + try: + makelog = get_path(paths, 'makelog') + output_dir = get_path(paths, 'output_dir') + direct = LyXDirective(output_dir = output_dir, + application = 'latex', + program = program, + makelog = makelog, + **kwargs) + + temp_name = direct.program_name + temp_program = direct.program + + # Generate folder for auxiliary files + os.mkdir('latex_auxiliary_dir') + + # Execute + command = metadata.commands[direct.osname][direct.application] % (direct.executable, direct.option, temp_program) + exit_code, stderr = direct.execute_command(command) + direct.write_log() + if exit_code != 0: + error_message = 'LaTeX program executed with errors. Traceback can be found below.' + error_message = format_message(error_message) + raise_from(ProgramError(error_message, stderr), None) + + # Move PDF output + temp_pdf = os.path.join('latex_auxiliary_dir', temp_name + '.pdf') + output_pdf = os.path.join(direct.output_dir, direct.program_name + '.pdf') + + if temp_pdf != output_pdf: + shutil.copy2(temp_pdf, output_pdf) + shutil.rmtree('latex_auxiliary_dir') + + # Remove auxiliary files + except ProgramError: + raise + except: + error_message = 'Error with `run_latex`. Traceback can be found below.' + error_message = format_message(error_message) + write_to_makelog(paths, error_message + '\n\n' + traceback.format_exc()) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def run_mathematica(paths, program, **kwargs): + """.. Run Mathematica script using system command. + + Runs script ``program`` using system command, with script specified + in the form of ``script.m``. Status messages are appended to file ``makelog``. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + program : str + Path of script to run. + + Path Keys + --------- + makelog : str + Path of makelog. + + Note + ---- + We recommend leaving all other parameters to their defaults. + + Other Parameters + ---------------- + osname : str, optional + Name of OS. Used to determine syntax of system command. Defaults to ``os.name``. + shell : `bool`, optional + See `here `_. + Defaults to ``True``. + log : str, optional + Path of program log. Program log is only written if specified. + Defaults to ``''`` (i.e., not written). + executable : str, optional + Executable to use for system command. + Defaults to executable specified in :ref:`default settings`. + option : str, optional + Options for system command. Defaults to options specified in :ref:`default settings`. + args : str, optional + Not applicable. + + Returns + ------- + None + + Example + ------- + .. code-block:: python + + run_mathematica(paths, program = 'script.m') + """ + + try: + makelog = get_path(paths, 'makelog') + direct = ProgramDirective(application = 'math', + program = program, + makelog = makelog, + **kwargs) + + # Execute + command = metadata.commands[direct.osname][direct.application] % (direct.executable, direct.program, direct.option) + exit_code, stderr = direct.execute_command(command) + direct.write_log() + if exit_code != 0: + error_message = 'Mathematica program executed with errors. Traceback can be found below.' + error_message = format_message(error_message) + raise_from(ProgramError(error_message, stderr), None) + except ProgramError: + raise + except: + error_message = 'Error with `run_mathematica`. Traceback can be found below.' 
+ error_message = format_message(error_message) + write_to_makelog(paths, error_message + '\n\n' + traceback.format_exc()) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def run_matlab(paths, program, **kwargs): + """.. Run Matlab script using system command. + + Runs script ``program`` using system command, with script specified + in the form of ``script.m``. Status messages are appended to file ``makelog``. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + program : str + Path of script to run. + + Path Keys + --------- + makelog : str + Path of makelog. + + Note + ---- + We recommend leaving all other parameters to their defaults. + + Other Parameters + ---------------- + osname : str, optional + Name of OS. Used to determine syntax of system command. Defaults to ``os.name``. + shell : `bool`, optional + See `here `_. + Defaults to ``True``. + log : str, optional + Path of program log. Program log is only written if specified. + Defaults to ``''`` (i.e., not written). + executable : str, optional + Executable to use for system command. + Defaults to executable specified in :ref:`default settings`. + option : str, optional + Options for system command. Defaults to options specified in :ref:`default settings`. + args : str, optional + Not applicable. + + Returns + ------- + None + + Example + ------- + .. code-block:: python + + run_matlab(paths, program = 'script.m') + """ + + try: + makelog = get_path(paths, 'makelog') + direct = ProgramDirective(application = 'matlab', + program = program, + makelog = makelog, + **kwargs) + + # Get program output + program_log = os.path.join(os.getcwd(), direct.program_name + '.log') + + # Execute + command = metadata.commands[direct.osname][direct.application] % (direct.executable, direct.option, direct.program, direct.program_name + '.log') + exit_code, stderr = direct.execute_command(command) + if exit_code != 0: + error_message = 'Matlab program executed with errors. Traceback can be found below.' + error_message = format_message(error_message) + raise_from(ProgramError(error_message, stderr), None) + direct.move_program_output(program_log, direct.log) + except ProgramError: + raise + except: + error_message = 'Error with `run_matlab`. Traceback can be found below.' + error_message = format_message(error_message) + write_to_makelog(paths, error_message + '\n\n' + traceback.format_exc()) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def run_perl(paths, program, **kwargs): + """.. Run Perl script using system command. + + Runs script ``program`` using system command, with script specified + in the form of ``script.pl``. Status messages are appended to file ``makelog``. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + program : str + Path of script to run. + + Path Keys + --------- + makelog : str + Path of makelog. + + Note + ---- + We recommend leaving all other parameters to their defaults. + + Other Parameters + ---------------- + osname : str, optional + Name of OS. Used to determine syntax of system command. Defaults to ``os.name``. + shell : `bool`, optional + See `here `_. + Defaults to ``True``. + log : str, optional + Path of program log. Program log is only written if specified. + Defaults to ``''`` (i.e., not written). + executable : str, optional + Executable to use for system command. 
+ Defaults to executable specified in :ref:`default settings`. + option : str, optional + Options for system command. Defaults to options specified in :ref:`default settings`. + args : str, optional + Arguments for system command. Defaults to no arguments. + + Returns + ------- + None + + Example + ------- + .. code-block:: python + + run_perl(paths, program = 'script.pl') + """ + + try: + makelog = get_path(paths, 'makelog') + direct = ProgramDirective(application = 'perl', + program = program, + makelog = makelog, + **kwargs) + + # Execute + command = metadata.commands[direct.osname][direct.application] % (direct.executable, direct.option, direct.program, direct.args) + exit_code, stderr = direct.execute_command(command) + direct.write_log() + if exit_code != 0: + error_message = 'Perl program executed with errors. Traceback can be found below.' + error_message = format_message(error_message) + raise_from(ProgramError(error_message, stderr), None) + except ProgramError: + raise + except: + error_message = 'Error with `run_perl`. Traceback can be found below.' + error_message = format_message(error_message) + write_to_makelog(paths, error_message + '\n\n' + traceback.format_exc()) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def run_python(paths, program, **kwargs): + """.. Run Python script using system command. + + Runs script ``program`` using system command, with script specified + in the form of ``script.py``. Status messages are appended to file ``makelog``. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + program : str + Path of script to run. + + Path Keys + --------- + makelog : str + Path of makelog. + + Note + ---- + We recommend leaving all other parameters to their defaults. + + Other Parameters + ---------------- + osname : str, optional + Name of OS. Used to determine syntax of system command. Defaults to ``os.name``. + shell : `bool`, optional + See `here `_. + Defaults to ``True``. + log : str, optional + Path of program log. Program log is only written if specified. + Defaults to ``''`` (i.e., not written). + executable : str, optional + Executable to use for system command. + Defaults to executable specified in :ref:`default settings`. + option : str, optional + Options for system command. Defaults to options specified in :ref:`default settings`. + args : str, optional + Arguments for system command. Defaults to no arguments. + + Returns + ------- + None + + Example + ------- + .. code-block:: python + + run_python(paths, program = 'script.py') + """ + + try: + makelog = get_path(paths, 'makelog') + direct = ProgramDirective(application = 'python', + program = program, + makelog = makelog, + **kwargs) + + # Execute + command = metadata.commands[direct.osname][direct.application] % (direct.executable, direct.option, direct.program, direct.args) + exit_code, stderr = direct.execute_command(command) + direct.write_log() + if exit_code != 0: + error_message = 'Python program executed with errors. Traceback can be found below.' + error_message = format_message(error_message) + raise_from(ProgramError(error_message, stderr), None) + except ProgramError: + raise + except: + error_message = 'Error with `run_python`. Traceback can be found below.' 
+ error_message = format_message(error_message) + write_to_makelog(paths, error_message + '\n\n' + traceback.format_exc()) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def run_r(paths, program, **kwargs): + """.. Run R script using system command. + + Runs script ``program`` using system command, with script specified + in the form of ``script.R``. Status messages are appended to file ``makelog``. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + program : str + Path of script to run. + + Path Keys + --------- + makelog : str + Path of makelog. + + Note + ---- + We recommend leaving all other parameters to their defaults. + + Other Parameters + ---------------- + osname : str, optional + Name of OS. Used to determine syntax of system command. Defaults to ``os.name``. + shell : `bool`, optional + See `here `_. + Defaults to ``True``. + log : str, optional + Path of program log. Program log is only written if specified. + Defaults to ``''`` (i.e., not written). + executable : str, optional + Executable to use for system command. + Defaults to executable specified in :ref:`default settings`. + option : str, optional + Options for system command. Defaults to options specified in :ref:`default settings`. + args : str, optional + Not applicable. + + Returns + ------- + None + + Example + ------- + .. code-block:: python + + run_r(paths, program = 'script.R') + """ + + try: + makelog = get_path(paths, 'makelog') + direct = ProgramDirective(application = 'r', + program = program, + makelog = makelog, + **kwargs) + + # Execute + command = metadata.commands[direct.osname][direct.application] % (direct.executable, direct.option, direct.program) + exit_code, stderr = direct.execute_command(command) + direct.write_log() + if exit_code != 0: + error_message = 'R program executed with errors. Traceback can be found below.' + error_message = format_message(error_message) + raise_from(ProgramError(error_message, stderr), None) + except ProgramError: + raise + except: + error_message = 'Error with `run_r`. Traceback can be found below.' + error_message = format_message(error_message) + write_to_makelog(paths, error_message + '\n\n' + traceback.format_exc()) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def run_sas(paths, program, lst = '', **kwargs): + """.. Run SAS script using system command. + + Runs script ``program`` using system command, with script specified + in the form of ``script.sas``. Status messages are appended to file ``makelog``. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + program : str + Path of script to run. + lst : str, optional + Path of program lst. Program lst is only written if specified. + Defaults to ``''`` (i.e., not written). + + Path Keys + --------- + makelog : str + Path of makelog. + + Note + ---- + We recommend leaving all other parameters to their defaults. + + Other Parameters + ---------------- + osname : str, optional + Name of OS. Used to determine syntax of system command. Defaults to ``os.name``. + shell : `bool`, optional + See `here `_. + Defaults to ``True``. + log : str, optional + Path of program log. Program log is only written if specified. + Defaults to ``''`` (i.e., not written). + executable : str, optional + Executable to use for system command. + Defaults to executable specified in :ref:`default settings`. 
+ option : str, optional + Options for system command. Defaults to options specified in :ref:`default settings`. + args : str, optional + Not applicable. + + Returns + ------- + None + + Example + ------- + .. code-block:: python + + run_sas(paths, program = 'script.sas') + """ + + try: + makelog = get_path(paths, 'makelog') + direct = SASDirective(application = 'sas', + program = program, + makelog = makelog, + **kwargs) + + # Get program outputs + program_log = os.path.join(os.getcwd(), direct.program_name + '.log') + program_lst = os.path.join(os.getcwd(), direct.program_name + '.lst') + + # Execute + command = metadata.commands[direct.osname][direct.application] % (direct.executable, direct.option, direct.program) + exit_code, stderr = direct.execute_command(command) + if exit_code != 0: + error_message = 'SAS program executed with errors. Traceback can be found below.' + error_message = format_message(error_message) + raise_from(ProgramError(error_message, stderr), None) + direct.move_program_output(program_log) + direct.move_program_output(program_lst) + except ProgramError: + raise + except: + error_message = 'Error with `run_sas`. Traceback can be found below.' + error_message = format_message(error_message) + write_to_makelog(paths, error_message + '\n\n' + traceback.format_exc()) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def run_stat_transfer(paths, program, **kwargs): + """.. Run StatTransfer script using system command. + + Runs script ``program`` using system command, with script specified + in the form of ``script.stc`` or ``script.stcmd``. + Status messages are appended to file ``makelog``. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + program : str + Path of script to run. + + Path Keys + --------- + makelog : str + Path of makelog. + + Note + ---- + We recommend leaving all other parameters to their defaults. + + Other Parameters + ---------------- + osname : str, optional + Name of OS. Used to determine syntax of system command. Defaults to ``os.name``. + shell : `bool`, optional + See `here `_. + Defaults to ``True``. + log : str, optional + Path of program log. Program log is only written if specified. + Defaults to ``''`` (i.e., not written). + executable : str, optional + Executable to use for system command. + Defaults to executable specified in :ref:`default settings`. + option : str, optional + Options for system command. Defaults to options specified in :ref:`default settings`. + args : str, optional + Not applicable. + + Returns + ------- + None + + Example + ------- + .. code-block:: python + + run_stat_transfer(paths, program = 'script.stc') + """ + + try: + makelog = get_path(paths, 'makelog') + direct = ProgramDirective(application = 'st', + program = program, + makelog = makelog, + **kwargs) + + # Execute + command = metadata.commands[direct.osname][direct.application] % (direct.executable, direct.program) + exit_code, stderr = direct.execute_command(command) + direct.write_log() + if exit_code != 0: + error_message = 'StatTransfer program executed with errors. Traceback can be found below.' + error_message = format_message(error_message) + raise_from(ProgramError(error_message, stderr), None) + except ProgramError: + raise + except: + error_message = 'Error with `run_stat_transfer`. Traceback can be found below.' 
+ error_message = format_message(error_message) + write_to_makelog(paths, error_message + '\n\n' + traceback.format_exc()) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def run_stata(paths, program, **kwargs): + """.. Run Stata script using system command. + + Runs script ``program`` using system command, with script specified + in the form of ``script.do``. Status messages are appended to file ``makelog``. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + program : str + Path of script to run. + + Path Keys + --------- + makelog : str + Path of makelog. + + Note + ---- + We recommend leaving all other parameters to their defaults. + + Note + ---- + When a do-file contains a space in its name, different version of Stata save the + corresponding log file with different names. Some versions of Stata truncate the + name to everything before the first space of the do-file name. + + Other Parameters + ---------------- + osname : str, optional + Name of OS. Used to determine syntax of system command. Defaults to ``os.name``. + shell : `bool`, optional + See `here `_. + Defaults to ``True``. + log : str, optional + Path of program log. Program log is only written if specified. + Defaults to ``''`` (i.e., not written). + executable : str, optional + Executable to use for system command. + Defaults to executable specified in :ref:`default settings`. + option : str, optional + Options for system command. Defaults to options specified in :ref:`default settings`. + args : str, optional + Not applicable. + + Returns + ------- + None + + Example + ------- + .. code-block:: python + + run_stata(paths, program = 'script.do') + """ + + try: + makelog = get_path(paths, 'makelog') + direct = ProgramDirective(application = 'stata', + program = program, + makelog = makelog, + **kwargs) + + # Get program output (partial) + program_name = direct.program.split(" ")[0] + program_name = os.path.split(program_name)[-1] + program_name = os.path.splitext(program_name)[0] + program_log_partial = os.path.join(os.getcwd(), program_name + '.log') + + # Get program output (full) + program_log_full = os.path.join(os.getcwd(), direct.program_name + '.log') + + # Sanitize program + if direct.osname == "posix": + direct.program = re.escape(direct.program) + + # Execute + command = metadata.commands[direct.osname]['stata'] % (direct.executable, direct.option, direct.program) + exit_code, stderr = direct.execute_command(command) + if exit_code != 0: + error_message = 'Stata program executed with errors. Traceback can be found below.' + error_message = format_message(error_message) + raise_from(ProgramError(error_message, stderr), None) + try: + output = direct.move_program_output(program_log_partial, direct.log) + except: + output = direct.move_program_output(program_log_full, direct.log) + _check_stata_output(output) + except ProgramError: + raise + except: + error_message = 'Error with `run_stata`. Traceback can be found below.' + error_message = format_message(error_message) + write_to_makelog(paths, error_message + '\n\n' + traceback.format_exc()) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def _check_stata_output(output): + """.. Check Stata output""" + + regex = "end of do-file[\s]*r\([0-9]*\);" + if re.search(regex, output): + error_message = 'Stata program executed with errors.' 
+ error_message = format_message(error_message) + raise_from(ProgramError(error_message, 'See makelog for more detail.'), None) + + +def execute_command(paths, command, **kwargs): + """.. Run system command. + + Runs system command `command` with shell execution boolean ``shell``. + Outputs are appended to file ``makelog`` and written to system command log file ``log``. + Status messages are appended to file ``makelog``. + + Parameters + ---------- + paths : dict + Dictionary of paths. Dictionary should contain values for all keys listed below. + command : str + System command to run. + shell : `bool`, optional + See `here `_. + Defaults to ``True``. + log : str, optional + Path of system command log. System command log is only written if specified. + Defaults to ``''`` (i.e., not written). + + Path Keys + --------- + makelog : str + Path of makelog. + + Note + ---- + We recommend leaving all other parameters to their defaults. + + Other Parameters + ---------------- + osname : str, optional + Name of OS. Used to check if OS is supported. Defaults to ``os.name``. + + + Returns + ------- + None + + Example + ------- + The following code executes the ``ls`` command, + writes outputs to system command log file ``'file'``, + and appends outputs and/or status messages to ``paths['makelog']``. + + .. code-block:: python + + execute_command(paths, 'ls', log = 'file') + """ + + try: + makelog = get_path(paths, 'makelog') + direct = Directive(makelog = makelog, **kwargs) + + # Execute + exit_code, stderr = direct.execute_command(command) + direct.write_log() + if exit_code != 0: + error_message = 'Command executed with errors. Traceback can be found below.' + error_message = format_message(error_message) + raise_from(ProgramError(error_message, stderr), None) + except ProgramError: + raise + except: + error_message = 'Error with `execute_command`. Traceback can be found below.' + error_message = format_message(error_message) + write_to_makelog(paths, error_message + '\n\n' + traceback.format_exc()) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +def run_module(root, module, build_script = 'make.py', osname = None): + """.. Run module. + + Runs script `build_script` in module directory `module` relative to root of repository `root`. + + Parameters + ---------- + root : str + Directory of root. + module: str + Name of module. + build_script : str + Name of build script. Defaults to ``make.py``. + osname : str, optional + Name of OS. Used to determine syntax of system command. Defaults to ``os.name``. + + Returns + ------- + None + + Example + ------- + The following code runs the script ``root/module/make.py``. + + .. code-block:: python + + run_module(root = 'root', module = 'module') + """ + + osname = osname if osname else os.name # https://github.com/sphinx-doc/sphinx/issues/759 + + try: + module_dir = os.path.join(root, module) + os.chdir(module_dir) + + build_script = norm_path(build_script) + if not os.path.isfile(build_script): + raise CritError(messages.crit_error_no_file % build_script) + + message = 'Running module `%s`' % module + message = format_message(message) + message = colored(message, attrs = ['bold']) + print('\n' + message) + + status = os.system('%s %s' % (metadata.default_executables[osname]['python'], build_script)) + if status != 0: + raise ProgramError() + except ProgramError: + sys.exit() + except: + error_message = 'Error with `run_module`. Traceback can be found below.' 
+ error_message = format_message(error_message) + raise_from(ColoredError(error_message, traceback.format_exc()), None) + + +__all__ = ['run_stata', 'run_matlab', 'run_perl', 'run_python', + 'run_jupyter', 'run_mathematica', 'run_stat_transfer', + 'run_lyx', 'run_latex', 'run_r', 'run_sas', + 'execute_command', 'run_module'] \ No newline at end of file diff --git a/17/replication_package/code/lib/gslab_make/run_program.pyc b/17/replication_package/code/lib/gslab_make/run_program.pyc new file mode 100644 index 0000000000000000000000000000000000000000..2608adf03eaad19df849c83228d7eb41323fb5d6 Binary files /dev/null and b/17/replication_package/code/lib/gslab_make/run_program.pyc differ diff --git a/17/replication_package/code/lib/gslab_make/tablefill.py b/17/replication_package/code/lib/gslab_make/tablefill.py new file mode 100644 index 0000000000000000000000000000000000000000..b64232dbbd056f8b131570e9543434f4d324a80b --- /dev/null +++ b/17/replication_package/code/lib/gslab_make/tablefill.py @@ -0,0 +1,641 @@ +# -*- coding: utf-8 -*- +from __future__ import absolute_import, division, print_function, unicode_literals +from future.utils import raise_from, string_types +from builtins import (bytes, str, open, super, range, + zip, round, input, int, pow, object) + +import io +import re +import traceback +from itertools import chain + +import gslab_make.private.messages as messages +from gslab_make.private.exceptionclasses import CritError, ColoredError +from gslab_make.private.utility import convert_to_list, norm_path, format_message + + +def _parse_tag(tag): + """.. Parse tag from input.""" + + if not re.match('\n', tag, flags = re.IGNORECASE): + raise Exception + else: + tag = re.sub('\n', r'\g<1>', tag, flags = re.IGNORECASE) + tag = tag.lower() + + return(tag) + + +def _parse_data(data, null): + """.. Parse data from input. + + Parameters + ---------- + data : list + Input data to parse. + null : str + String to replace null characters. + + Returns + ------- + data : list + List of data values from input. + """ + null_strings = ['', '.', 'NA'] + + data = [row.rstrip('\r\n') for row in data] + data = [row for row in data if row] + data = [row.split('\t') for row in data] + data = chain(*data) + data = list(data) + data = [null if value in null_strings else value for value in data] + + return(data) + + +def _parse_content(file, null): + """.. Parse content from input.""" + + with io.open(file, 'r', encoding = 'utf-8') as f: + content = f.readlines() + try: + tag = _parse_tag(content[0]) + except: + raise_from(CritError(messages.crit_error_no_tag % file), None) + data = _parse_data(content[1:], null) + + return(tag, data) + + +def _insert_value(line, value, type, null): + """.. Insert value into line. + + Parameters + ---------- + line : str + Line of document to insert value. + value : str + Value to insert. + type : str + Formatting for value. + + Returns + ------- + line : str + Line of document with inserted value. 
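+
+ Example
+ -------
+ Illustrative sketch only; ``_insert_value`` is an internal helper called by
+ ``tablefill``, and the lines and values below are hypothetical.
+
+ .. code-block:: python
+
+ # '#2#' asks for the value rounded to two decimal places
+ _insert_value('mean: #2#', '3.14159', 'round', '.') # -> 'mean: 3.14'
+
+ # '###' inserts the value verbatim
+ _insert_value('N: ###', '1024', 'no change', '.') # -> 'N: 1024'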
+ """ + + if (type == 'no change'): + line = re.sub('\\\\?#\\\\?#\\\\?#', value, line) + + elif (type == 'round'): + if value == null: + line = re.sub('(.*?)\\\\?#[0-9]+\\\\?#', r'\g<1>' + value, line) + else: + try: + value = float(value) + except: + raise_from(CritError(messages.crit_error_not_float % value), None) + digits = re.findall('\\\\?#([0-9]+)\\\\?#', line)[0] + rounded_value = format(value, '.%sf' % digits) + line = re.sub('(.*?)\\\\?#[0-9]+\\\\?#', r'\g<1>' + rounded_value, line) + + elif (type == 'comma + round'): + if value == null: + line = re.sub('(.*?)\\\\?#[0-9]+,\\\\?#', r'\g<1>' + value, line) + else: + try: + value = float(value) + except: + raise_from(CritError(messages.crit_error_not_float % value), None) + digits = re.findall('\\\\?#([0-9]+),\\\\?#', line)[0] + rounded_value = format(value, ',.%sf' % digits) + line = re.sub('(.*?)\\\\?#[0-9]+,\\\\?#', r'\g<1>' + rounded_value, line) + + return(line) + + +def _insert_tables_lyx(template, tables, null): + """.. Fill tables for LyX template. + + Parameters + ---------- + template : str + Path of LyX template to fill. + tables : dict + Dictionary ``{tag: values}`` of tables. + + Returns + ------- + template : str + Filled LyX template. + """ + + with io.open(template, 'r', encoding = 'utf-8') as f: + doc = f.readlines() + + is_table = False + + for i in range(len(doc)): + # Check if table + if not is_table and re.match('name "tab:', doc[i]): + tag = doc[i].replace('name "tab:','').rstrip('"\n').lower() + try: + values = tables[tag] + entry_count = 0 + is_table = True + except KeyError: + raise_from(CritError(messages.crit_error_no_input_table % tag), None) + + # Fill in values if table + if is_table: + try: + if re.match('.*###', doc[i]): + doc[i] = _insert_value(doc[i], values[entry_count], 'no change', null) + entry_count += 1 + elif re.match('.*#[0-9]+#', doc[i]): + doc[i] = _insert_value(doc[i], values[entry_count], 'round', null) + entry_count += 1 + elif re.match('.*#[0-9]+,#', doc[i]): + doc[i] = _insert_value(doc[i], values[entry_count], 'comma + round', null) + entry_count += 1 + elif re.match('', doc[i]): + is_table = False + if entry_count != len(values): + raise_from(CritError(messages.crit_error_too_many_values % tag), None) + except IndexError: + raise_from(CritError(messages.crit_error_not_enough_values % tag), None) + + doc = '\n'.join(doc) + + return(doc) + + +def _insert_tables_latex(template, tables, null): + """.. Fill tables for LaTeX template. + + Parameters + ---------- + template : str + Path of LaTeX template to fill. + tables : dict + Dictionary ``{tag: values}`` of tables. + + Returns + ------- + template : str + Filled LaTeX template. 
+ """ + + with io.open(template, 'r', encoding = 'utf-8') as f: + doc = f.readlines() + + is_table = False + + for i in range(len(doc)): + # Check if table + if not is_table and re.search('label\{tab:', doc[i]): + tag = doc[i].split(':')[1].rstrip('}\n').strip('"').lower() + try: + values = tables[tag] + entry_count = 0 + is_table = True + except KeyError: + raise_from(CritError(messages.crit_error_no_input_table % tag), None) + + # Fill in values if table + if is_table: + try: + line = doc[i].split("&") + + for j in range(len(line)): + if re.search('.*\\\\#\\\\#\\\\#', line[j]): + line[j] = _insert_value(line[j], values[entry_count], 'no change', null) + entry_count += 1 + elif re.search('.*\\\\#[0-9]+\\\\#', line[j]): + line[j] = _insert_value(line[j], values[entry_count], 'round', null) + entry_count += 1 + elif re.search('.*\\\\#[0-9]+,\\\\#', line[j]): + line[j] = _insert_value(line[j], values[entry_count], 'comma + round', null) + entry_count += 1 + + doc[i] = "&".join(line) + + if re.search('end\{tabular\}', doc[i], flags = re.IGNORECASE): + is_table = False + if entry_count != len(values): + raise_from(CritError(messages.crit_error_too_many_values % tag), None) + except IndexError: + raise_from(CritError(messages.crit_error_not_enough_values % tag), None) + + doc = '\n'.join(doc) + + return(doc) + + +def _insert_tables(template, tables, null): + """.. Fill tables for template. + + Parameters + ---------- + template : str + Path of template to fill. + tables : dict + Dictionary ``{tag: values}`` of tables. + + Returns + ------- + template : str + Filled template. + """ + + template = norm_path(template) + + if re.search('\.lyx', template): + doc = _insert_tables_lyx(template, tables, null) + elif re.search('\.tex', template): + doc = _insert_tables_latex(template, tables, null) + + return(doc) + + +def tablefill(inputs, template, output, null = '.'): + """.. Fill tables for template using inputs. + + Fills tables in document ``template`` using files in list ``inputs``. + Writes filled document to file ``output``. + Null characters in ``inputs`` are replaced with value ``null``. + + Parameters + ---------- + inputs : list + Input or list of inputs to fill into template. + template : str + Path of template to fill. + output : str + Path of output. + null : str + Value to replace null characters (i.e., ``''``, ``'.'``, ``'NA'``). Defaults to ``'.'``. + + Returns + ------- + None + + Example + ------- + + .. code-block:: + + ################################################################# + # tablefill_readme.txt - Help/Documentation for tablefill.py + ################################################################# + + Description: + tablefill.py is a Python module designed to fill LyX/Tex tables with output + from text files (usually output from Stata or Matlab). + + Usage: + Tablefill takes as input a LyX (or Tex) file containing empty tables (the template + file) and text files containing data to be copied to these tables (the + input files), and produces a LyX (or Tex) file with filled tables (the output file). + For brevity, LyX will be used to denote LyX or Tex files throughout. + + Tablefill must first be imported to make.py. 
This is typically achieved
+ by including the following lines:
+
+ ```
+ from gslab_fill.tablefill import tablefill
+ ```
+
+ Once the module has been imported, the syntax used to call tablefill is
+ as follows:
+
+ ```
+ tablefill(inputs = 'input_file(s)', template = 'template_file',
+ output = 'output_file')
+ ```
+
+ The argument 'template' is the user-written LyX file which contains the
+ tables to be filled in. The argument 'inputs' is a list of the text files
+ containing the output to be copied to the LyX tables. If there are multiple
+ input text files, they are listed as: inputs = 'input_file_1 input_file_2'.
+ The argument 'output' is the name of the filled LyX file to be produced.
+ Note that this file is created by tablefill.py and should not be edited
+ manually by the user.
+
+ ###########################
+ Input File Format:
+ ###########################
+
+ The data needs to be tab-delimited rows of numbers (or characters),
+ preceded by `