PHASE 3: COMMUNICATION AND FUTURE EXPLORATION
    CLOUD BASED DISASTER RECOVERY SYSTEM
ABSTRACT:
Disaster recovery has become an essential service for both individuals and
businesses, securing data and providing backups in the event of a disaster. This
project aims to develop a cloud-based data recovery system that protects data
against undesirable events and preserves business continuity.
Cloud-based recovery systems outperform conventional on-premises solutions in
several important areas. They offer cost-effectiveness through subscription
models, scalability without upfront investment, and remote accessibility for
continuous operations. They support data security and business continuity with
strong dependability, automated backups, and improved disaster recovery. Their
robust security features and simplified maintenance further enhance their appeal,
making them an excellent option for modern businesses.
This involves creating one or more standby databases that mirror the primary
database, containing data such as files and applications from on-premises
systems. Once the data reaches the cloud, it is stored in redundant storage
locations to ensure its recovery and security. In the event that one or more data
centres are affected by calamities or hardware breakdowns, this redundancy helps
guarantee data durability and availability.
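The redundancy described above can be sketched in a few lines of Python. The helper below is hypothetical (not part of the project code); a real deployment would replicate to separate cloud regions rather than local folders, but the principle of writing every backup to several independent locations is the same:

```python
import os
import shutil

def replicate_backup(backup_file, target_dirs):
    """Copy a backup file to several storage locations so that the
    loss of any one location does not lose the data."""
    copies = []
    for target in target_dirs:
        os.makedirs(target, exist_ok=True)
        dest = os.path.join(target, os.path.basename(backup_file))
        shutil.copy2(backup_file, dest)  # copy2 also preserves timestamps
        copies.append(dest)
    return copies
```

A restore can then fall back to any surviving copy if the primary location is unreachable.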
To sum up, cloud-based data recovery solutions provide a simple and effective way
to protect data and ensure business continuity. Through the use of redundant
infrastructure and remote servers, these systems offer scalable, reasonably
priced, and resilient protection against disasters and data loss. With advanced
features such as secure storage, automated backups, and flexible recovery
options, cloud-based recovery systems enable businesses and individuals to
navigate the digital landscape with confidence.
SYSTEM REQUIREMENTS:
HARDWARE REQUIREMENTS:
      1. Hard Disk: 256 GB
      2. RAM: 8 GB or higher
      3. A computer configured as a server to ensure regular data backups
SOFTWARE REQUIREMENTS:
      1. Integrated Development Environments (IDEs) such as Visual Studio Code
         or PyCharm.
      2. PythonAnywhere (PaaS).
      3. Google Cloud Platform (GCP) credentials.
TOOLS AND VERSIONS:
      1. Python: Version 3.x (e.g., Python 3.6, Python 3.7, Python 3.8)
      2. MySQL: Version 5.7
      3. PostgreSQL: Version 10 or later
      4. SQLite: Version 3 or later
      5. Spyder 5.4.0
  FLOWCHART:
CODE IMPLEMENTATION:
     import os
     import shutil
     import datetime
     import subprocess
     import zipfile
     from google.oauth2.credentials import Credentials
     from google_auth_oauthlib.flow import InstalledAppFlow
     from google.auth.transport.requests import Request

     # Backup and restore directories
     BACKUP_DIR = r'C:\Users\91741\project-directory\backup_script'
     RESTORE_DIR = r'C:\Users\91741\project-directory\restore_script'

     # Google Drive authentication scopes
     SCOPES = ['https://www.googleapis.com/auth/drive.file']

     def backup_database():
         # Database connection details
         DB_HOST = 'localhost'
         DB_USER = 'your_db_username'
         DB_PASSWORD = 'your_db_password'
         DB_NAME = 'your_db_name'
         # Create the backup directory if it doesn't exist
         os.makedirs(BACKUP_DIR, exist_ok=True)
         # Generate a timestamped backup file name
         timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
         backup_file = os.path.join(BACKUP_DIR, f'{DB_NAME}_backup_{timestamp}.sql')
         # Command to perform the database backup using mysqldump
         command = (f'mysqldump -h {DB_HOST} -u {DB_USER} '
                    f'-p{DB_PASSWORD} {DB_NAME} > "{backup_file}"')
         # Execute the command
         subprocess.run(command, shell=True, check=True)
         print(f"Database backup saved to {backup_file}")

     def backup_files():
         # Create the backup directory if it doesn't exist
         os.makedirs(BACKUP_DIR, exist_ok=True)
         # Zip the project directory into the backup directory
         timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
         archive = shutil.make_archive(
             os.path.join(BACKUP_DIR, f'files_backup_{timestamp}'),
             'zip', r'C:\Users\91741\project-directory')
         print(f"Files backup saved to {archive}")

     def restore_database():
         # Database connection details
         DB_HOST = 'localhost'
         DB_USER = 'your_db_username'
         DB_PASSWORD = 'your_db_password'
         DB_NAME = 'your_db_name'
         # Create the restore directory if it doesn't exist
         os.makedirs(RESTORE_DIR, exist_ok=True)
         # List available SQL backup files
         backup_files = [f for f in os.listdir(BACKUP_DIR) if f.endswith('.sql')]
         # Pick the most recent backup (build full paths for getctime)
         latest_backup = max(
             backup_files,
             key=lambda f: os.path.getctime(os.path.join(BACKUP_DIR, f)))
         # Command to restore the database using mysql
         command = (f'mysql -h {DB_HOST} -u {DB_USER} -p{DB_PASSWORD} '
                    f'{DB_NAME} < "{os.path.join(BACKUP_DIR, latest_backup)}"')
         # Execute the command
         subprocess.run(command, shell=True, check=True)
         print(f"Database restored from {latest_backup}")

     def restore_files():
         # List zipped backup files
         backups = [f for f in os.listdir(BACKUP_DIR) if f.endswith('.zip')]
         # Create the restore directory if it doesn't exist
         os.makedirs(RESTORE_DIR, exist_ok=True)
         # Iterate through the backup archives and extract them
         for file in backups:
             with zipfile.ZipFile(os.path.join(BACKUP_DIR, file), 'r') as zip_ref:
                 zip_ref.extractall(RESTORE_DIR)
         print("Files restored successfully")

     def authenticate():
         creds = None
         # Reuse a saved token from a previous run if one exists
         if os.path.exists('token.json'):
             creds = Credentials.from_authorized_user_file('token.json', SCOPES)
         if not creds or not creds.valid:
             if creds and creds.expired and creds.refresh_token:
                 creds.refresh(Request())
             else:
                 flow = InstalledAppFlow.from_client_secrets_file(
                     'credentials.json', SCOPES)
                 creds = flow.run_local_server(port=0)
             # Save the credentials for the next run
             with open('token.json', 'w') as token:
                 token.write(creds.to_json())
         return creds

     def main():
         authenticate()
         backup_database()
         backup_files()
         restore_database()
         restore_files()

     if __name__ == '__main__':
         main()
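The backup routines above write files to disk but never verify them. A minimal integrity check, recording a SHA-256 digest alongside each backup and recomputing it before restore, could look like this (a sketch using only the standard library, not part of the project code above):

```python
import hashlib

def file_digest(path, chunk_size=65536):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            h.update(chunk)
    return h.hexdigest()

def write_checksum(path):
    """Store the digest next to the backup as '<path>.sha256'."""
    digest = file_digest(path)
    with open(path + '.sha256', 'w') as f:
        f.write(digest)
    return digest

def verify_backup(path):
    """Recompute the digest and compare it with the stored one."""
    with open(path + '.sha256') as f:
        stored = f.read().strip()
    return file_digest(path) == stored
```

Running `verify_backup` before `restore_database` would catch a truncated or corrupted dump before it overwrites live data.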
PROJECT HURDLES:
            Initially, we struggled to comprehend the OAuth 2.0 authentication
protocol for Google Drive integration, since terms like "authorization code" and
"refresh token" were unfamiliar. Handling file paths in Python, especially on
Windows, proved difficult owing to backslash escaping, requiring the use of raw
string literals (r'...'). We also ran into Google Drive API quota limits during
testing and development, which caused API failures when exceeded. Choosing
appropriate database backup tools and integrating the disaster recovery solution
into existing infrastructure required careful analysis and close coordination
with other teams to ensure compatibility and seamless workflow integration.
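The path-handling hurdle is worth illustrating. In a plain string literal, a sequence such as \n inside 'C:\new_folder' is silently parsed as a newline; raw strings avoid this, and pathlib removes manual backslash handling altogether (the paths below are illustrative):

```python
from pathlib import PureWindowsPath

# Without a raw string, the '\n' in 'C:\new_folder' becomes a newline
# character, so the plain form is one character shorter than the raw form.
plain = 'C:\new_folder'
raw = r'C:\new_folder'
print(len(plain), len(raw))  # → 12 13

# pathlib joins path components portably, with no manual backslashes.
base = PureWindowsPath(r'C:\Users\91741\project-directory')
backup_dir = base / 'backup_script'
print(backup_dir)  # → C:\Users\91741\project-directory\backup_script
```

Using `pathlib.Path` throughout the backup scripts would sidestep the escaping issues described above.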
OUTPUT:
Database backup saved to C:\Users\91741\project-directory\backup_script\your_db_name_backup_20220520123456.sql
Files backup saved to C:\Users\91741\project-directory\backup_script\files_backup_20220520123500.zip
Database restored from your_db_name_backup_20220520123456.sql
Files restored successfully
This recovery solution is to be implemented in the following website
CONCLUSION:
This project shows how cloud-based disaster recovery systems can assist both
individuals and companies. Scalability, affordability, and improved accessibility
are among its benefits. The system's strong security features, automated backups,
and adaptable recovery options offer a dependable answer to contemporary
data-management challenges. Despite obstacles such as OAuth 2.0 authentication
and API quota restrictions, the system integrates smoothly with existing
infrastructure.
FUTURE SCOPE:
Key components of a future version of this cloud-based disaster recovery system
include stronger security features, support for multiple cloud providers,
incremental and differential backup strategies, real-time backup and
synchronization, automated disaster recovery testing, a user-friendly interface,
a scalable architecture, machine learning integration, regulatory compliance, and
a mobile application. These enhancements will improve the system's functionality,
security, and dependability for users who experience data loss. The system will
also be built to comply with HIPAA and GDPR requirements, making it suitable for
sectors with strict data protection laws, and will make remote data management
convenient.
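Of the enhancements above, incremental backup is straightforward to sketch: instead of archiving everything, only files modified since the last backup are copied. The helper below is a hypothetical illustration using modification times from the standard library, not a finished strategy (a production version would also track deletions):

```python
import os
import shutil

def incremental_backup(source_dir, dest_dir, last_backup_time):
    """Copy only files modified after last_backup_time (a Unix
    timestamp), mirroring the directory layout under dest_dir."""
    copied = []
    for root, _dirs, files in os.walk(source_dir):
        for name in files:
            src = os.path.join(root, name)
            if os.path.getmtime(src) > last_backup_time:
                rel = os.path.relpath(src, source_dir)
                dst = os.path.join(dest_dir, rel)
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.copy2(src, dst)
                copied.append(rel)
    return copied
```

Each run records its own timestamp, which becomes `last_backup_time` for the next run, so unchanged files are never re-uploaded.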