Hourly hydrometeorological data were collected over the 30-year period 1984-2014 in Upper Sheep Creek, within the Reynolds Creek Experimental Watershed, Idaho, USA. These data were used to calibrate the one-dimensional Simultaneous Heat and Water (SHAW) model. The data and the SHAW calibration have previously been described in multiple publications, particularly Chauvin et al. (2011) and Flerchinger et al. (2016). In the dataset presented here, climate scenarios were constructed, applied to the historic record, simulated in the SHAW model, and the hydrologic results were analyzed. These data include the following:

(1) uscData: The historical data described above, prepared for input into the SHAW model with three hydrologic response units (HRUs), labeled “aspen”, “highsage”, and “lowsage”.

(2) synthetic_data: These directories contain synthetic data representing climate scenarios, with corresponding names. Climate scenarios were constructed in the script “climate_analyze.R”. The climate projections input to this script were obtained from the Integrated Scenarios project: http://climate.nkn.uidaho.edu/IntegratedScenarios/. Historic (1950-2005) and future (2050-2099, RCP 8.5) climate projections were obtained for the grid cell containing Upper Sheep Creek. Seasonally variable daily mean changes in temperature and precipitation were derived by comparing the historical and future data; these were stored as “delta functions” (in delta.csv). For each seasonally variable scenario, the effective mean annual change in precipitation or temperature was calculated and applied in the same amount to each day of the water year. The naming convention is txpydz, where “x” is the temperature deviation from the delta function derived by comparing historical data with the 2050-2099 RCP 8.5 projections, and “y” is the precipitation deviation from the original delta function. An x or y value of NA indicates that temperature or precipitation, respectively, was not changed.
“z” indicates whether the simulation is seasonally constant (z = 1) or seasonally variable (z = 0). An empirically derived drift function was applied to each of these scenarios to determine effective precipitation after snow drifting (Chauvin et al. 2011).

(3) model_runs: This directory contains all the files necessary to run the SHAW model. For more information on SHAW model inputs and outputs, see ShawUsers.30.docx or the ftp site: ftp://ftp.nwrc.ars.usda.gov/public/ShawModel/. In these scenarios, the .exe files named for each HRU contain the names of the .inp files; providing these .exe files as input allows SHAW to be run automatically from an R script. The script “run_shaw_windows.R” contains the code needed to run SHAW. To use this code, we recommend reading the script and changing the directory paths at its beginning to suit your needs. The script takes several hours to run on a laptop computer, particularly for the aspen HRU (~10 minutes per scenario on a 2015 MacBook Pro with a 2.7 GHz Intel Core i5 and 8 GB RAM, running Windows in Parallels).

(4) model_output: These folders contain the outputs of SHAW model runs with the scenarios described above. Within each HRU directory, each folder is named for one scenario and contains the output files corresponding to that scenario. The output files are named only for the type of file (e.g., water.out); for that reason, the directory structure provided here is critical to the interpretability of these data. This folder also contains a hydrologic response unit not included in “synthetic_data”: in the “as” folder, aspen HRU site characteristics, including weather, have been used, but aspen have been replaced with grasses to simulate the effects of aspen mortality.

(5) Tabular_results: This folder contains .csv files that summarize the data contained in the model_output folder over different time periods and for different analysis objectives.
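To make the txpydz naming convention above concrete, here is a minimal sketch of a parser for scenario names. It is written in Python for illustration only (the project's own scripts are in R), and the example names such as "t2p10d1" are hypothetical, not necessarily names present in synthetic_data; the numeric format accepted for x and y is an assumption.

```python
import re

# Sketch of a parser for scenario names of the form "txpydz":
#   x = temperature deviation (or NA if temperature unchanged)
#   y = precipitation deviation (or NA if precipitation unchanged)
#   z = 1 for seasonally constant, 0 for seasonally variable
# The numeric pattern (optionally signed, optionally decimal) is an
# assumption about how x and y are written in the actual directory names.
_SCENARIO_RE = re.compile(
    r"^t(?P<temp>NA|-?\d+(?:\.\d+)?)"
    r"p(?P<precip>NA|-?\d+(?:\.\d+)?)"
    r"d(?P<mode>[01])$"
)

def parse_scenario(name):
    """Return a dict describing a txpydz scenario name, or None if the
    name does not match the convention."""
    m = _SCENARIO_RE.match(name)
    if m is None:
        return None

    def _num(s):
        # NA means the corresponding variable was not changed.
        return None if s == "NA" else float(s)

    return {
        "temp_delta": _num(m.group("temp")),
        "precip_delta": _num(m.group("precip")),
        "seasonally_constant": m.group("mode") == "1",
    }
```

For example, a hypothetical name "t2p10d1" would parse to a temperature deviation of 2, a precipitation deviation of 10, and a seasonally constant simulation, while "tNAp-5d0" would indicate unchanged temperature.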
Each of these .csv files is created within the workflow described in the R script “run_shaw.R”.

(6) R: This folder contains the scripts used to run the analyses for this project. The script “run_shaw.R” is heavily commented and functions as a recipe for the entire analysis; following the directions in run_shaw.R should enable you to reproduce the analysis in this project.
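Because the model_output files are named only by type (e.g., water.out), the HRU and scenario identity of each file must be recovered from its directory path. The following is a minimal sketch, in Python for illustration (the project's own workflow is the R script run_shaw.R), of indexing such a tree into a single table; the function names and the "path" column are illustrative, not part of the dataset.

```python
import csv
import os

def index_outputs(root, filename="water.out"):
    """Return (hru, scenario, path) tuples for every output file found
    under a model_output-style tree: root/<HRU>/<scenario>/<filename>."""
    rows = []
    for hru in sorted(os.listdir(root)):
        hru_dir = os.path.join(root, hru)
        if not os.path.isdir(hru_dir):
            continue
        for scenario in sorted(os.listdir(hru_dir)):
            path = os.path.join(hru_dir, scenario, filename)
            if os.path.isfile(path):
                # Scenario identity comes from the directory name, since
                # the output file itself is named only for its type.
                rows.append((hru, scenario, path))
    return rows

def write_index(root, out_csv):
    """Write the index as a .csv with one row per output file."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["hru", "scenario", "path"])
        writer.writerows(index_outputs(root))
```

A table like this can then be joined with parsed scenario attributes to summarize results across scenarios, which is the general shape of what the Tabular_results files provide.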