Use the generalized inversion technique (GIT) to estimate stress drops for the 2019 Ridgecrest sequence. Original data are downloaded from the Community Stress Drop Validation Study. gmprocess is used to download station files and event files, and then to run QA on the original data. Code modified from Klimasewski et al. (2019) is then used to run the GIT.
Working directory henceforth denoted as ~/
QA processing takes place in ~/gmprocess
GIT takes place in ~/dataset_name
gmprocess:
~/gmprocess
|
+-- event_downloads
| |
| +-- conf
| | |
| | +-- config.yml
| +-- data
|
+-- qa_processing
|
+-- conf
| |
| +-- config.yml
+-- data

dataset:
~/dataset_name
|
+-- event_files
|
+-- station_data
|
+-- RC_beta
|
+-- RC_phase_beta
|
| +-- eventid.phase

/GitHub/stress_drop/
|
+-- set_up
| |
| +-- cp_event_files.py
| |
| +-- cp_eventjson.py
| |
| +-- cp_events.py
| |
| +-- cp_stn_files.py
| |
| +-- create_event_dirs.py
| |
| +-- h5_to_mseed.py
|
+-- station_info
| |
| +-- stations.py
|
+-- event_info
| |
| +-- event_info.py
|
+-- GIT
|
+-- step1_compute_spectra.py
|
+-- step2_secondo_meters.py
|
+-- step3_findBrune_trapezoids.py
|
+-- step4_secondo_constraint.py
|
+-- step5_fitBrune.py
- Use `create_event_dirs.py` to create event directories for the dataset in both:
    a. `~/gmprocess/event_downloads/data/`
    b. `~/dataset/RC_beta/`
- Use `cp_events.py` to copy original event data from its directory into the `~/gmprocess/qa_processing` directory.
- Download `event.json` files:
    a. Enter the `~/gmprocess/event_downloads` directory and initialize a gmprocess project.
    b. Set the `config.yml` downloader to a very small radius in degrees (you may have to do this more than once; these `*.yml` files can behave unpredictably).
    c. Run `>> gmrecords download`.
    d. This downloads event data for each event into its raw folder, along with an `event.json` file for each event. All we need from this download is the `event.json` file.
- Use `cp_eventjson.py` to copy `event.json` files from `event_downloads` to their corresponding event directories in `~/gmprocess/qa_processing/data`.
- Use `cp_stn_files.py` to copy station `*.xml` files from `~/gmprocess/station_downloads/station_files` into each event raw directory in `~/gmprocess/qa_processing/data`.
- Process the dataset:
    a. Enter `~/gmprocess/qa_processing/` and initialize gmprocess.
    b. Set up the `config.yml` file as desired for QA.
    c. Run `>> gmrecords assemble`.
    d. Run `>> gmrecords process`.
    e. The result is a `*.h5` file in each event directory containing all waveforms and information about whether they passed our tests.
- Use `extract_tr.py` to read the `*.h5` files and write all waveforms that passed QA screening to event directories in `~/dataset/processing/`. NOTE: this Python script must be run INSIDE an instance of gmprocess from the command line!
- Use `cp_event_files.py` to copy event `*.txt` files into `~/dataset/event_files`.
- Use `stations.py` to create a `*.csv` file listing stations, their locations, and how many times each station appears in the dataset. Records from stations that appear fewer than 3 times will be discarded in a later step. This file goes in `~/dataset/station_data`.
- Inversion:
    a. Use `step1_compute_spectra.py` to compute the FFT of the data in `~/dataset/RC_beta/` and save the spectra to `~/dataset/record_spectra/`.
    b. Use `step2_secondo_meters.py` to obtain event and site spectra and save them to `~/dataset/Andrews_inversion/`.
    c. Use `step3_findBrune_trapezoids.py` to find the most "Brune-like" (in shape) event spectrum to use as an amplitude constraint and save it to `~/dataset/constraint/`.
    d. Use `step4_secondo_constraint.py` to apply the constraint to all event and site spectra and save them to `~/dataset/Andrews_inversion_constrained/`.
    e. Use `step5_fitBrune.py` to fit each event spectrum to the Brune model with nonlinear least squares, finding a best-fit corner frequency and moment, and save the results in `~/dataset/stress_drops/stress_drops_dataset.csv`.
- Analyze the results as desired.
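The event-directory setup in the first step can be sketched as below. This is a guess at the shape of `create_event_dirs.py`, not its actual contents; the function name and example call are illustrative:

```python
import os

def create_event_dirs(event_ids, *roots):
    """Create one directory per event ID under each root path.

    Hypothetical sketch of what create_event_dirs.py might do.
    """
    for root in roots:
        for evid in event_ids:
            # exist_ok lets the script be re-run safely
            os.makedirs(os.path.join(root, evid), exist_ok=True)

# Example, mirroring the two targets named in the steps above:
# create_event_dirs(event_ids, "~/gmprocess/event_downloads/data", "~/dataset/RC_beta")
```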
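The bookkeeping behind `stations.py` amounts to counting records per station and flagging stations with fewer than 3 appearances for the later discard step. A minimal sketch (the function names and CSV columns are hypothetical, not the script's actual output format):

```python
import csv
from collections import Counter

def station_counts(station_rows, min_count=3):
    """Count records per station; return counts plus the stations to keep.

    station_rows is a list of (network, station) tuples, one per record.
    """
    counts = Counter(station_rows)
    keep = {stn for stn, n in counts.items() if n >= min_count}
    return counts, keep

def write_station_csv(path, counts):
    """Write the per-station record counts to a CSV file."""
    with open(path, "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["network", "station", "n_records"])
        for (net, stn), n in sorted(counts.items()):
            w.writerow([net, stn, n])
```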
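At its core, inversion step a is an FFT per record. A minimal numpy sketch, ignoring the windowing, tapering, and unit handling the real `step1_compute_spectra.py` presumably performs:

```python
import numpy as np

def record_spectrum(trace, dt):
    """Amplitude spectrum of one record via a real FFT.

    trace: 1-D array of samples; dt: sample spacing in seconds.
    """
    n = len(trace)
    freqs = np.fft.rfftfreq(n, d=dt)
    # Scale by dt so the amplitude approximates the continuous Fourier transform.
    amps = np.abs(np.fft.rfft(trace)) * dt
    return freqs, amps
```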
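Inversion steps b and d rest on the Andrews-style decomposition: each record spectrum is modeled, one frequency at a time, as the product of an event term and a site term, and a constraint removes the inherent event/site trade-off. A toy numpy sketch at a single frequency; the real step d scales by the most Brune-like event spectrum found in step c, whereas here a zero-mean site constraint stands in for it:

```python
import numpy as np

def git_decompose(log_amp, ev_idx, st_idx, n_ev, n_st):
    """Solve log|R_ij| = log|E_i| + log|S_j| in a least-squares sense.

    log_amp: log spectral amplitude of each record at one frequency.
    ev_idx, st_idx: event and station index of each record.
    """
    n_rec = len(log_amp)
    G = np.zeros((n_rec + 1, n_ev + n_st))
    for k in range(n_rec):
        G[k, ev_idx[k]] = 1.0
        G[k, n_ev + st_idx[k]] = 1.0
    G[n_rec, n_ev:] = 1.0  # constraint row: site terms sum to zero
    d = np.append(log_amp, 0.0)
    m, *_ = np.linalg.lstsq(G, d, rcond=None)
    return m[:n_ev], m[n_ev:]  # log event terms, log site terms
```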
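Inversion step e fits the Brune omega-squared model, A(f) = Omega0 / (1 + (f/fc)^2). The sketch below uses a simple grid search over corner frequency with a closed-form Omega0 instead of the script's nonlinear least squares, then converts moment and corner frequency to stress drop with the standard Brune relation; the grid bounds and `beta` (shear-wave velocity, m/s) are illustrative values only:

```python
import numpy as np

def fit_brune(freqs, amps):
    """Fit the Brune model to an event spectrum by grid search over fc."""
    log_a = np.log(amps)
    best = None
    for fc in np.geomspace(0.05, 20.0, 400):
        shape = -np.log1p((freqs / fc) ** 2)
        log_o0 = np.mean(log_a - shape)  # closed-form Omega0 for this fc
        resid = np.sum((log_a - (log_o0 + shape)) ** 2)
        if best is None or resid < best[0]:
            best = (resid, fc, np.exp(log_o0))
    _, fc, omega0 = best
    return fc, omega0

def brune_stress_drop(m0, fc, beta=3500.0):
    """Brune stress drop in Pa: (7/16) * M0 * (2*pi*fc / (2.34*beta))**3."""
    return 7.0 / 16.0 * m0 * (2.0 * np.pi * fc / (2.34 * beta)) ** 3
```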