PHASE 2: MODEL DEVELOPMENT AND EVALUATION
CLOUD BASED DISASTER RECOVERY SYSTEM
ABSTRACT:
Disaster recovery has become an essential service for both individuals and
businesses, securing data and providing backups in the event of a disaster. This
project aims to develop a cloud-based data recovery system that protects data
against undesirable events and preserves business continuity.
Cloud-based recovery systems outperform conventional on-premises solutions in
several important areas. They offer cost-effectiveness through subscription
models, scalability without upfront investment, and remote accessibility for
continuous operations. They safeguard data and business continuity with strong
reliability, automated backups, and improved disaster recovery. Their robust
security features and simplified maintenance further enhance their appeal,
making them an excellent option for modern businesses.
The system creates one or more standby databases that mirror the primary
database, along with files and applications from on-premises systems. Once data
reaches the cloud, it is stored in redundant locations so that it can be recovered
securely. If one or more data centres are affected by calamities or hardware
breakdowns, this redundancy helps guarantee data durability and availability.
To sum up, cloud-based data recovery solutions provide a simple and effective way
to protect data and ensure business continuity. Through redundant infrastructure
and remote servers, these systems offer scalable, reasonably priced, and resilient
protection against disasters and data loss. With advanced features such as secure
storage, automated backups, and flexible recovery options, they enable businesses
and individuals to navigate the digital landscape with confidence.
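To make the redundancy described above concrete, the short sketch below copies a
finished backup file to several storage locations so that the loss of any single
store is not fatal. This is a minimal illustration only: the replica paths are
hypothetical placeholders, and in the real system the copies would live in
separate cloud regions rather than local folders.

import os
import shutil

# Hypothetical replica locations; in production these would be storage
# buckets in separate cloud regions, not local directories.
REPLICA_DIRS = [r'D:\replicas\site_a', r'E:\replicas\site_b']

def replicate_backup(backup_path):
    # Copy the backup to every replica so one failed store cannot lose it
    for target in REPLICA_DIRS:
        os.makedirs(target, exist_ok=True)
        shutil.copy2(backup_path, target)
        print(f'Replicated {os.path.basename(backup_path)} to {target}')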
SYSTEM REQUIREMENTS:
HARDWARE REQUIREMENTS:
      1. Hard Disk: 256 GB
      2. RAM: 8 GB or higher
      3. A computer configured as a server to ensure regular data backups
SOFTWARE REQUIREMENTS:
      1. An Integrated Development Environment (IDE) such as Visual Studio Code
      or PyCharm
      2. PythonAnywhere (PaaS)
      3. Google Cloud Platform (GCP) credentials
TOOLS AND VERSIONS:
      1. Python: Version 3.x (e.g., Python 3.6, 3.7, or 3.8)
      2. MySQL: Version 5.7
      3. PostgreSQL: Version 10 or later
      4. SQLite: Version 3 or later
      5. Spyder: Version 5.4.0
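Before running the scripts, it is worth confirming that the interpreter and the
MySQL client tools listed above are actually available. A quick sanity check,
assuming only the standard library and that the tools should be on PATH:

import shutil
import sys

# Report the interpreter version and locate the MySQL command-line tools
print(f'Python {sys.version.split()[0]}')
for tool in ('mysqldump', 'mysql'):
    path = shutil.which(tool)
    print(f'{tool}: {path or "NOT FOUND - install the MySQL client tools"}')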
FLOWCHART:
(Flowchart: authenticate with Google Drive, back up the database and project
files, then restore the latest backups on demand.)
CODE IMPLEMENTATION:
import os
import shutil
import datetime
import subprocess
import zipfile
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from google.auth.transport.requests import Request

# Backup and restore directories
BACKUP_DIR = r'C:\Users\91741\project-directory\backup_script'
RESTORE_DIR = r'C:\Users\91741\project-directory\restore_script'

# Google Drive authentication scopes
SCOPES = ['https://www.googleapis.com/auth/drive.file']

def backup_database():
    # Database connection details
    DB_HOST = 'localhost'
    DB_USER = 'your_db_username'
    DB_PASSWORD = 'your_db_password'
    DB_NAME = 'your_db_name'
    # Create the backup directory if it doesn't exist
    os.makedirs(BACKUP_DIR, exist_ok=True)
    # Generate a timestamped backup file name
    timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
    backup_file = os.path.join(BACKUP_DIR, f'{DB_NAME}_backup_{timestamp}.sql')
    # Command to perform the database backup using mysqldump
    command = (f'mysqldump -h {DB_HOST} -u {DB_USER} '
               f'-p{DB_PASSWORD} {DB_NAME} > "{backup_file}"')
    # Execute the command; check=True raises an error if the dump fails
    subprocess.run(command, shell=True, check=True)
    print(f'Database backup saved to {backup_file}')

def backup_files():
    # Create the backup directory if it doesn't exist
    os.makedirs(BACKUP_DIR, exist_ok=True)
    # Zip the project directory into a timestamped archive
    timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
    archive = shutil.make_archive(
        os.path.join(BACKUP_DIR, f'files_backup_{timestamp}'),
        'zip', r'C:\Users\91741\project-directory')
    print(f'Files backup saved to {archive}')

def restore_database():
    # Database connection details
    DB_HOST = 'localhost'
    DB_USER = 'your_db_username'
    DB_PASSWORD = 'your_db_password'
    DB_NAME = 'your_db_name'
    # Create the restore directory if it doesn't exist
    os.makedirs(RESTORE_DIR, exist_ok=True)
    # List SQL backup files as full paths so getctime works from any directory
    backup_paths = [os.path.join(BACKUP_DIR, f)
                    for f in os.listdir(BACKUP_DIR) if f.endswith('.sql')]
    # Pick the most recently created backup file
    latest_backup = max(backup_paths, key=os.path.getctime)
    # Command to restore the database using mysql
    command = (f'mysql -h {DB_HOST} -u {DB_USER} '
               f'-p{DB_PASSWORD} {DB_NAME} < "{latest_backup}"')
    # Execute the command
    subprocess.run(command, shell=True, check=True)
    print(f'Database restored from {os.path.basename(latest_backup)}')

def restore_files():
    # Create the restore directory if it doesn't exist
    os.makedirs(RESTORE_DIR, exist_ok=True)
    # List zip backup files
    backup_zips = [f for f in os.listdir(BACKUP_DIR) if f.endswith('.zip')]
    # Extract each archive into the restore directory
    for file in backup_zips:
        with zipfile.ZipFile(os.path.join(BACKUP_DIR, file), 'r') as zip_ref:
            zip_ref.extractall(RESTORE_DIR)
    print('Files restored successfully')

def authenticate():
    # Load cached credentials if present, otherwise run the OAuth flow
    creds = None
    if os.path.exists('token.json'):
        creds = Credentials.from_authorized_user_file('token.json', SCOPES)
    if not creds or not creds.valid:
        if creds and creds.expired and creds.refresh_token:
            creds.refresh(Request())
        else:
            flow = InstalledAppFlow.from_client_secrets_file(
                'credentials.json', SCOPES)
            creds = flow.run_local_server(port=0)
        # Cache the credentials for subsequent runs
        with open('token.json', 'w') as token:
            token.write(creds.to_json())
    return creds

def main():
    # Obtain Google Drive credentials before any cloud operations
    authenticate()
    backup_database()
    backup_files()
    restore_database()
    restore_files()

if __name__ == '__main__':
    main()
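Note that the script above authenticates against Google Drive but never uploads
the backups it produces. A minimal upload step might look like the following
sketch; it assumes the google-api-python-client package is installed, and
upload_backup is a hypothetical helper rather than part of the script above.

import os
from googleapiclient.discovery import build
from googleapiclient.http import MediaFileUpload

def upload_backup(creds, file_path):
    # Build a Drive v3 client from the credentials returned by authenticate()
    service = build('drive', 'v3', credentials=creds)
    metadata = {'name': os.path.basename(file_path)}
    media = MediaFileUpload(file_path, resumable=True)
    # Create the file in Drive and ask only for its ID in the response
    uploaded = service.files().create(
        body=metadata, media_body=media, fields='id').execute()
    print(f'Uploaded {file_path} as Drive file {uploaded["id"]}')

It would be called as upload_backup(authenticate(), backup_file) after each
backup completes, so every new archive lands in Drive as well as on local disk.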
PROJECT HURDLES:
            Initially, we struggled to comprehend the OAuth 2.0 authentication
protocol for Google Drive integration, since terms like "authorization code" and
"refresh token" were unfamiliar. Handling file paths in Python, especially on
Windows, proved difficult owing to backslash escaping, which required the use of
raw string literals (r'...'). We also ran into Google Drive API quota limits
during testing and development, which caused API calls to fail once the quota was
exceeded; retrying rate-limited calls with backoff, as sketched below, addresses
this. Finally, choosing appropriate database backup tools and integrating the
disaster recovery solution into existing infrastructure required careful analysis
and close coordination with other teams to ensure compatibility and seamless
workflow integration.
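A practical answer to those quota failures is to retry rate-limited calls with
exponential backoff. The sketch below assumes the google-api-python-client
package, whose HttpError carries the HTTP status of a failed request; it is an
illustration of the pattern, not the exact code we shipped.

import time
from googleapiclient.errors import HttpError

def execute_with_backoff(request, max_retries=5):
    # Retry a Drive API request when the server signals rate limiting
    # (HTTP 403 or 429), sleeping 1s, 2s, 4s, ... between attempts.
    for attempt in range(max_retries):
        try:
            return request.execute()
        except HttpError as err:
            if err.resp.status in (403, 429) and attempt < max_retries - 1:
                time.sleep(2 ** attempt)
            else:
                raise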
OUTPUT:
Database backup saved to C:\Users\91741\project-directory\backup_script\your_db_name_backup_20220520123456.sql
Files backup saved to C:\Users\91741\project-directory\backup_script\files_backup_20220520123500.zip
Database restored from your_db_name_backup_20220520123456.sql
Files restored successfully
This recovery solution is to be integrated into the project website.