(Prefer Binder for running the web-app inline.)
The analysis component of the Csaransh software project.
- GCC 5.3 or later, or an equivalent C++14 compiler.
- CMake.
- Python 3.x, or Python 2.7 or above; can be installed with Miniconda or Conda.
- Go to the anuvikar directory.
- Make a new directory `_build`.
- Go to this directory and run `cmake ..` and then `cmake --build .`.
The executable `anuvikar` can be found in the `_build` directory. You can run the tests from the anuvikar directory by running the `anuvikar_test` application that gets built in the `_build` directory; the command would look like `./_build/anuvikar_test`.
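The build-and-test steps above can be sketched as a single shell session (a sketch, assuming `cmake` and a C++14 compiler are on your `PATH` and that you start from the repository root):

```shell
cd anuvikar             # the analysis component's source directory
mkdir -p _build         # out-of-source build directory
cd _build
cmake ..                # configure
cmake --build .         # builds the anuvikar executable and anuvikar_test
cd ..
./_build/anuvikar_test  # run the tests from the anuvikar directory
```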
You will need to install the required Python packages to run it on your system. For that,
- Go to anuvikar directory.
- Run `conda env create -f environment.yml` and `conda activate csaransh` if you are using conda, or use `pip install -r requirements.txt`.
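Put together, the environment setup looks like the following (a sketch, assuming `environment.yml` and `requirements.txt` live in the anuvikar directory as described above):

```shell
cd anuvikar
# with conda:
conda env create -f environment.yml
conda activate csaransh
# or, without conda:
pip install -r requirements.txt
```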
- Go to anuvikar directory.
- Switch to the csaransh conda environment if using conda, with: `conda activate csaransh`.
- Run `python avi_validate_cdb.py $pathToXyzArchiveDir $pathToOutputDir ...pathToXmlMetafiles`.
- The first argument is a path to the directory that has archived xyz files as downloaded / stored in CascadesDB. The directory can have many archives, but only the ones that correspond to the metafiles given in argument three onwards will be analysed.
- The second argument is a path to the output directory. Archives will be extracted to this directory. The processed files `anuvikar.json`, `anuvikar.db` and `log.txt` will also be stored here. Since the output files have the same names, take care that outputs are not overwritten by multiple runs.
- The third argument onwards can be multiple XML metafiles to process. These can be given as `pathToMetaFileDir/*.xml`.
- This command may take some time; you can ignore the warnings and runtime warnings on the output console. There can be errors like a corrupt archive, failure to unzip, etc.
- Go through `log.txt` in the output directory. Search for errors and warnings. Files with errors are not analysed. These can be looked at and discussed with the author before adding to CascadesDB.
- An example run command: `python avi_validate_cdb /data/W/newEntries/ /data/W/newEntries /data/W/newEntries/*xml`. Here we have archives and xml files in the same directory and we want output files to be written to the same directory.
- Another example run command: `python avi_validate_cdb /data/W/allArchives/ /data/W/newEntries /data/W/newEntries/*xml`. Here we are storing all the old and new archives in the same directory `allArchives`. The directory for the new xml files and for the output files is the same.
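Scanning `log.txt` for problems is a one-line `grep`. A minimal sketch — the log lines fabricated below are hypothetical, for illustration only; real entries are written by `avi_validate_cdb.py`:

```shell
# fabricate a tiny log file for illustration only
printf 'processed archive run_001.zip\nError: corrupt archive run_002.zip\nWarning: could not unzip run_003.zip\n' > log.txt
# list every error/warning with its line number; files with errors are skipped by the analysis
grep -in -e 'error' -e 'warning' log.txt
```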
- Run `python avi_add_cdb.py $new_output_dir $destination_db_path $existing_database_path`:
- First argument: the output directory of `avi_validate_cdb.py`, which has the `anuvikar.json` and `anuvikar.db` files for the cascades that you wish to add to the database.
- Second argument: file path for the output db.
- Third argument: file path for the existing db, if one exists.
- The destination db path (second argument) is the database file that can be copied to `$csaransh_dir$/src/db/dev.csaransh.db` to view the updated database.
- The command generates another file, `destination_db_path + _tree.pickle`, which needs to be kept with the db file. It will be used by avi_add_cdb.py when adding more data later (when passing this new db as the third argument).
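Since the `_tree.pickle` companion must travel with the db file, a quick check like the following can catch a missing pair before a later `avi_add_cdb.py` run (the db path below is hypothetical; substitute your own destination db path):

```shell
db=/data/W/newEntries/csaransh.db   # hypothetical destination db path
tree="${db}_tree.pickle"            # companion file written by avi_add_cdb.py
for p in "$db" "$tree"; do
  [ -e "$p" ] && echo "found: $p" || echo "MISSING: $p"
done
```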
- Fresh database:
  - Run avi_validate_cdb for the archives and corresponding meta files. Provide the output directory of this command to avi_add_cdb. For example:

    ```
    python avi_validate_cdb /data/W/newEntries/ /data/W/newEntries /data/W/newEntries/*xml
    python avi_add_cdb /data/W/newEntries/ /data/W/newEntries/csaransh.db
    cp /data/W/newEntries/csaransh.db ./src/db/dev.csaransh.db
    ```

  - Now you can open the webpage (directions given later in this readme) to view your new entries on csaransh-webapp.
- Adding new data to the earlier processed db:
  - Let us say that you want to add new data to the db we created in the last step.

    ```
    python avi_validate_cdb /data/W/newerEntries/ /data/W/newerEntries /data/W/newerEntries/*xml
    python avi_add_cdb /data/W/newerEntries/ /data/W/newerEntries/csaransh.db /data/W/newEntries/csaransh.db
    cp /data/W/newerEntries/csaransh.db ./src/db/dev.csaransh.db
    ```

  - Now you can open the webpage (directions given later in this readme) to view your newer entries along with the new entries added earlier.
- One line gets all the cascade results into a python variable: `isSuccess, cascades = queryCdbToProcess(dataDir, config, material="W", energyRange=[2, 3], tempRange=["", 1500])`. The result is ready to be explored with Python or opened in the interactive web-app loaded with interactive plots. Starting from selecting cascades of interest based on different criteria (energy range, temperature, author etc.), to download, processing, visualizations, and statistics based on different groupings, it is all automated.
- Web-app with loads of features to explore the data.
A feature-loaded table view with column selection, multi-sort, and reflective filters provides a way to query the data quickly. If you select a filter, the other filters reflect the remaining options.
Quickly view and compare the statistics of different parameters, such as defect count and cascade volume, split on any combination of input fields, e.g. grouped by energy & element, temperature, or potential.
You can check the web page for the project (link) and read the manual to go through the different sections. The web page shows the results for the MD database of IAEA Challenge on Materials for Fusion 2018.
Publications and Talks:
- SaVi defect morphology algorithm: paper
- Potential comparison based on morphology using SaVi: paper
- MoD-PMI 2021 talk on SaVi algorithm and its applications presentation
- Unsupervised morphology classification and searching similar defects: paper
- MoD-PMI 2019 talk on defect identification algorithm used and classification of clusters presentation.
- To cite Csaransh, use the JOSS paper.
The initial version was submitted to the IAEA Challenge on Materials for Fusion 2018 as an entry from Bhabha Atomic Research Centre (BARC) by Utkarsh Bhardwaj, Ashok Arya, Harsh Hemani, Nancy Semwal and Manoj Warrier. Many thanks to the team, the challenge organizers and the judges for their encouraging words.