Cloud Report Final
A PROJECT REPORT
Submitted by
BALAJI P C - 822121104007
of
BACHELOR OF ENGINEERING
IN
RAJAMADAM – 614701.
               ANNA UNIVERSITY: CHENNAI 600 025
BONAFIDE CERTIFICATE
Certified that this project report titled "HYBRID PRIVACY AND TRUST
PRESERVATION MODEL FOR CLOUD ENVIRONMENT" is the bonafide work
of "BALAJI P C (822121104007), H MOHAMED MEERAN (822121104021)" of
Computer Science and Engineering who carried out this project work under my
supervision.
SIGNATURE SIGNATURE
                        ACKNOWLEDGEMENT
We thank the Almighty, whose blessings made this project a reality.
Our heartfelt thanks to our Head of the Department Dr.M.G.KAVITHA for the prompt
and limitless help in providing the excellent infrastructure to do the project and to
prepare the thesis.
We express our deep sense of gratitude to our project supervisor Dr.R.KIRUBA BURI
for his invaluable support, guidance, and encouragement towards the successful completion
of this project. His vision and spirit will always inspire and enlighten us.
We thank all the teaching and non-teaching staff members of the Department of
Computer Science and Engineering for their valuable help and guidance. We are grateful
to our family and friends for their constant support and encouragement.
     A Privacy-Preserving Framework for Cloud Data Access Using the Data
                                       Concealment Model
Abstract
Cloud computing has significantly impacted organizational operations by providing on-demand access
to resources, yet cross-organizational data sharing remains a challenge due to the need for mutual
agreement on how data is processed. Data protection becomes critical in cloud computing, where
organizations must trust that others comply with data-handling agreements and regulations. Given that
cloud data is highly sensitive, robust protection mechanisms are essential to ensure confidentiality and
security during data access. This project introduces the Data Concealment Model, which enhances data
protection in cloud storage by safeguarding access patterns. The model integrates four innovative
cloaking methods: Long-Term Cloaking, Multi-Region Based Cloaking, Time-based Cloaking, and
Geolocation-based Cloaking. These techniques work together to detect and differentiate between
legitimate users and bots, ensuring that only authorized users access benign content while unauthorized
users receive disguised content, thereby preventing malicious intrusions. Additionally, the project
employs the Camouflage Data Disguise technique that combines Chaffing and Winnowing with the
ChaCha20 encryption algorithm to securely disguise content for unauthorized access attempts. This
model not only ensures data confidentiality, location-based access control, and global consistency but
also simplifies certificate and key management, reducing system workload. By addressing critical data-
sharing challenges, the proposed model offers a secure, privacy-preserving solution for cloud storage,
streamlining security infrastructure and facilitating seamless, protected data access across
organizations.
                       TABLE OF CONTENTS
C. NO                       TITLE
        ABSTRACT
        LIST OF FIGURES
        LIST OF TABLES
        LIST OF ABBREVIATIONS
1       INTRODUCTION
        1.1. Overview
        1.2. Problem Statement
        1.3. Cloaking
        1.4. Aim And Objective
        1.5. Scope Of The Project
2       LITERATURE SURVEY
3       SYSTEM ANALYSIS
        3.1. Existing System
        3.2. Proposed System
4       SYSTEM REQUIREMENTS
        4.1. Hardware Requirements
        4.2. Software Requirements
5       SYSTEM DESIGN
        5.1. Input Design
        5.2. Output Design
        5.3. Database Design
6       SYSTEM ARCHITECTURE
        6.1. System Architecture
        6.2. Data Flow Diagram
        6.3. UML Diagram
               6.3.1. Use Case Diagram
               6.3.2. Activity Diagram
               6.3.3. Sequence Diagram
        6.4. Table Design
7       SYSTEM IMPLEMENTATION
        7.1. Project Description
     7.2. Modules Description
            7.2.1. Cloud Service Provider Web App
            7.2.2. End User Interface
            7.2.3. Cloaking Wall Model
            7.2.4. Access Policy Configurator
            7.2.5. Bot Identification And Data Distribution
            7.2.6. Disguise Data Generator
            7.2.7. Monitoring And Auditing
            7.2.8. Alerts And Notification
8    SYSTEM TESTING
     8.1. Software Testing
     8.2. Test Cases
     8.3. Test Report
     8.4. Software Description
           8.4.1. Python 3.8
           8.4.2. MySQL 5
           8.4.3. WampServer 2i
           8.4.4. Bootstrap
           8.4.5. Flask
9    CONCLUSION
10   FUTURE ENHANCEMENT
     APPENDIX
     A. Source Code
     B. Screenshots
     BIBLIOGRAPHY
     Journal References
     Book References
     Web References
                          LIST OF FIGURES
                        LIST OF TABLES
TABLE NO.   TABLE
6.5.1       Admin Login
6.5.2       Cc: Data Owner Register
6.5.3       Cc: File Uploaded Details
6.5.4       Cc: Long Term Wall Model
6.5.5       Cc: Geolocation Wall Model
6.5.6       Cc: Time Wall Model
6.5.7       Cc: Region Wall Model
6.5.8       Cc_File Access Log
6.5.9       Cc: Unauthorised Access Log
6.5.10      Cc: Data User
              LIST OF ABBREVIATIONS
S.NO   ABBREVIATION         EXPANSION
1      CDN                  Content Delivery Network
2      IAM                  Identity And Access Management
3      GCP                  Google Cloud Platform
4      AWS                  Amazon Web Services
5      API                  Application Programming Interface
6      UT                   Usability Testing
                                      CHAPTER 1
                                  INTRODUCTION
1.1. OVERVIEW
An enterprise cloud brings together private, public, and distributed clouds in a unified IT
environment. It offers a centralized control point. From there, businesses can manage enterprise
cloud applications and infrastructure in any cloud. An enterprise cloud provides businesses
with a seamless, consistent, and high-performance experience. Enterprise cloud computing is
the use, by companies and organizations, of virtualized IT resources such as external servers,
processing power, data storage capacity, databases, developer tools, and networking
infrastructure. Enterprise cloud solutions help organizations optimize their operations and cut
costs.
    •   Storage
Think of this as the virtual equivalent of file cabinets and storage rooms. Enterprise cloud
storage is far more advanced than regular hard drives, offering high-speed access and
superior data redundancy.
    •   Networking
This forms the backbone of the cloud architecture, interconnecting various services and
components. Enterprise cloud networking often includes load balancing, VPNs, and private
subnets for enhanced security and efficiency.
    •   Database Services
These are specialized storage services optimized for handling structured data. They serve as
the backend for applications that require data retrieval and storage capabilities, often in real-
time.
    •   Content Delivery Network (CDN)
A CDN helps in distributing the flow of network traffic across multiple servers, ensuring high
availability and reliability. It’s particularly crucial for businesses that serve a global audience.
    •   Monitoring and Analytics Tools
These tools keep tabs on performance metrics, usage statistics, and security incidents. They are
essential for maintaining optimal performance and security.
    •   Security Services
Enterprise cloud security includes features such as Identity and Access Management (IAM),
encryption, and intrusion detection systems, in addition to basic firewalls.
    •   APIs and SDKs
These are the building blocks that allow for customization and integration with other services
and applications. They offer the means to not only extract more value from your cloud services
but also to integrate them seamlessly into existing workflows.
    •   Orchestration and Automation
These components allow businesses to automate repetitive tasks and orchestrate complex
workflows, enhancing efficiency exponentially.
Start-ups can test new business ideas at low risk and low cost, thanks to enormous scalability.
Since there is no upfront capital expense involved, a new project that takes off can be scaled
up instantly, and scaled back down just as easily. Enterprise cloud computing allows a company
to create a shared workspace in order to collaborate with its trading partners and work together
as a ‘virtual enterprise network’. In this way, they can share information and communication
resources without owning it all themselves. This also helps in lowering costs.
Types of enterprise cloud architecture
In the world of computing and more precisely the enterprise cloud, cloud solutions have
become essential. Businesses need a solution that will allow them to access their data and
applications anytime, anywhere. There are four common models for enterprise cloud, each with
its own advantages and use cases.
   •   Public cloud
Public cloud is a computing model in which cloud infrastructure and services are provided over
a network that is accessible by the general public. All customers can use the services offered
by the vendor. This means that the cloud resources are owned, managed, and operated by a
third-party cloud computing service provider. Examples of such cloud solutions are Google
Cloud Platform (GCP), Amazon Web Services (AWS), and Microsoft Azure.
   •   Private cloud
Private Cloud is exclusively dedicated to a single organization. Only verified users can access
the cloud, as it’s not available to the public. It’s a type of cloud computing that involves
companies creating their own infrastructure, either in-house, or through a third-party provider
who hosts and secures it.
   •   Hybrid cloud
Hybrid cloud is a model that combines the use of public and private clouds. Both environments
can communicate and exchange data freely. A hybrid cloud allows organizations to take
advantage of the scalability and cost-effectiveness of the public cloud while maintaining the
security and compliance of sensitive data in a private infrastructure.
   •   Multi-cloud
The multi-cloud model is a more advanced and expanded variant of the hybrid cloud. In this
structure, a company can use more than one provider and freely combine private and public
clouds according to its needs.
1.2. PROBLEM STATEMENT
Rapid globalization of technology and the ever-expanding interconnectedness of the ‘Internet
of Things’ will continue to demand our constant attention to considerations around information
security; all channels, all devices, all the time. Cloud security considerations span a range of
concerns; resource connectivity, user entitlements, data loss prevention, transitory/stationary
data handling and encryption policies, data security classification restrictions, cross-border
information flow...the list goes on. It is not uncommon for information security considerations
to run counter to cloud solution patterns. Without clear information security guidelines
articulated in the Cloud Strategy, it is unlikely that the organization will be well protected from
security risks in the cloud.
The cloud environment is highly connected, which makes it much easier to bypass the security
measures at the perimeter. In the case of cloud security, everything is software, and cloud-based
security solutions need to respond to environmental changes through cloud-based management
systems or application programming interfaces (APIs) to cater to the dynamic environment. Due
to the large scale and flexibility of the cloud environment, implementing security solutions
faces numerous challenges. Since data on the cloud is frequently accessed outside corporate
networks, maintaining a record of data access is difficult. In essence, the problem revolves
around the necessity for an advanced and
comprehensive approach to cloud data storage security. The proposed Cloaking Wall Model
aims to provide a holistic solution by addressing these concerns, offering persistent
confidentiality, global consistency, timed access controls, and location-sensitive protection.
Moreover, the model strives to optimize certificate and key management, contributing to a
more streamlined and secure cloud data storage environment.
1.3. CLOAKING
The “cyber cloaking” initiative leverages emerging technology that can hide, or “cloak”, any
IP device, server, or secure cloud service, rendering it invisible to internal searches, external
cyber-attackers, and internet bots. Cloaking prevents leakage of information about services that
are vulnerable to web attacks. HTTP headers and return codes are concealed before a response
is sent to a client.
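As an illustration of how the cloaking checks described in this section can be combined, the sketch below decides whether a request receives benign or disguised content. It is written in Python (the project's implementation language); the region set, access window, and function names are hypothetical illustrations, not taken from the report.

```python
from datetime import datetime, time

# Hypothetical policy values, for illustration only.
ALLOWED_REGIONS = {"IN", "SG"}              # geolocation-based cloaking
ACCESS_WINDOW = (time(9, 0), time(18, 0))   # time-based cloaking

def cloaking_decision(region: str, when: datetime, is_bot: bool) -> str:
    """Return 'benign' for requests passing every cloaking check,
    'disguised' for bots or out-of-policy requests."""
    in_region = region in ALLOWED_REGIONS
    in_window = ACCESS_WINDOW[0] <= when.time() <= ACCESS_WINDOW[1]
    if is_bot or not in_region or not in_window:
        return "disguised"   # serve camouflage content instead of real data
    return "benign"
```

In a real deployment, `region` would be derived from IP geolocation and `is_bot` from request fingerprinting; here they are plain parameters to keep the decision logic visible.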
A key challenge lies in securing these non-production environments while enabling various data
users to complete mission-critical work. Camouflage Data Disguise enables organizations to
safely use data for critical business processes without exposing sensitive information. It
mitigates the risk of data breach and non-compliance by de-identifying sensitive data in
non-production environments. Camouflage Data Masking replaces sensitive data with fictional but
realistic values that maintain referential integrity, enabling data-driven business processes to
operate normally. Camouflage techniques include concealment, disguise, and dummies. Data
Camouflage adds a light masking to an application’s data: it simply scrambles the data to mask
the file on disk from casual observation. With this approach, one can protect data on disk from
unauthorized inspection.
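As a sketch of how masking can preserve referential integrity, the snippet below deterministically maps each sensitive value to the same fictional replacement every time, so joins across masked tables still line up. The key and naming scheme are illustrative assumptions, not part of the Camouflage product.

```python
import hashlib
import hmac

MASK_KEY = b"masking-secret-key"   # hypothetical key; rotate per environment

def mask_value(value: str, prefix: str = "user") -> str:
    """Deterministically replace a sensitive value with a fictional one.
    Equal inputs map to equal outputs, preserving referential integrity."""
    digest = hmac.new(MASK_KEY, value.encode(), hashlib.sha256).hexdigest()[:8]
    return f"{prefix}_{digest}"
```

Because the mapping is keyed, an observer of the masked data cannot invert it without `MASK_KEY`, yet every table that masked "Alice" holds the same replacement token.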
Chaffing and Winnowing
Chaffing and Winnowing is a cryptographic algorithm that enhances the security and privacy
of transmitted data by introducing decoy information and subsequently isolating the genuine
content. Here's a more detailed explanation: Chaffing and Winnowing provide a mechanism to
obfuscate data during transmission by blending genuine information with decoy elements and
then selectively extracting the real content using a secure key or algorithm. This technique
contributes to the confidentiality and integrity of transmitted data, particularly in scenarios
where privacy and protection against unauthorized interception are paramount.
Chaffing: Chaffing is the initial step in the process, where decoy or fake data, known as
"chaff," is intentionally added to the actual information being transmitted. The chaff is designed
to mimic the characteristics of the authentic data, making it challenging for unauthorized
entities to discern between the real and the decoy elements. This introduces a level of confusion
and complexity for anyone attempting to intercept or analyse the transmitted information.
Winnowing: Winnowing is the complementary process to chaffing. It involves the selective
separation of the genuine data from the added chaff, using a specific key or algorithm known
only to the intended recipient. The key or algorithm serves as the means to distinguish between
the authentic content and the deliberately introduced decoy elements. By applying this key
during the winnowing process, the recipient can effectively filter out the chaff, revealing the
original and unaltered information.
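The two steps above can be sketched with a per-packet MAC, in the spirit of Rivest's original construction: wheat packets carry a valid MAC under the shared key, chaff packets carry random bytes in place of a MAC, and the recipient winnows by MAC verification. The packet layout and key below are illustrative assumptions.

```python
import hashlib
import hmac
import os

KEY = b"shared-secret"   # hypothetical key known only to sender and recipient

def _mac(serial: int, bit: int) -> bytes:
    """MAC over the packet's serial number and payload bit."""
    return hmac.new(KEY, serial.to_bytes(4, "big") + bytes([bit]),
                    hashlib.sha256).digest()

def chaff(bits):
    """For each message bit, emit a genuine packet (valid MAC)
    and a decoy packet with the opposite bit (random bogus MAC)."""
    stream = []
    for i, bit in enumerate(bits):
        stream.append((i, bit, _mac(i, bit)))        # wheat
        stream.append((i, 1 - bit, os.urandom(32)))  # chaff
    return stream

def winnow(stream):
    """Keep only packets whose MAC verifies; reassemble by serial number."""
    wheat = {}
    for serial, bit, tag in stream:
        if hmac.compare_digest(tag, _mac(serial, bit)):
            wheat[serial] = bit
    return [wheat[i] for i in sorted(wheat)]
```

An eavesdropper without `KEY` sees two equally plausible bits at every position; the recipient's verification key is what separates wheat from chaff. (The chance of a random chaff tag verifying is negligible.)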
ChaCha20
ChaCha20 is a symmetric key stream cipher and one of the modern encryption algorithms. It
was designed by Daniel J. Bernstein, and it's a part of the Salsa20 family of stream ciphers.
ChaCha20 is known for its simplicity, speed, and resistance to cryptanalysis. The ChaCha20
encryption algorithm is designed to provide a combination of speed and security. It is
constructed to resist known attacks, including differential cryptanalysis and linear
cryptanalysis. Furthermore, it is highly parallelizable, making it easily adaptable to multi-core
processors and other high-performance computing systems.
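To make the construction concrete, here is a minimal pure-Python sketch of the ChaCha20 keystream as specified in RFC 8439 (32-byte key, 12-byte nonce, 20 rounds). Because ChaCha20 is a stream cipher, encryption and decryption are the same XOR operation; this is a teaching sketch, not a production implementation.

```python
import struct

def rotl32(v, c):
    """Rotate a 32-bit word left by c bits."""
    return ((v << c) & 0xffffffff) | (v >> (32 - c))

def quarter_round(s, a, b, c, d):
    s[a] = (s[a] + s[b]) & 0xffffffff; s[d] = rotl32(s[d] ^ s[a], 16)
    s[c] = (s[c] + s[d]) & 0xffffffff; s[b] = rotl32(s[b] ^ s[c], 12)
    s[a] = (s[a] + s[b]) & 0xffffffff; s[d] = rotl32(s[d] ^ s[a], 8)
    s[c] = (s[c] + s[d]) & 0xffffffff; s[b] = rotl32(s[b] ^ s[c], 7)

def chacha20_block(key: bytes, counter: int, nonce: bytes) -> bytes:
    """One 64-byte keystream block from the 16-word ChaCha20 state."""
    state = (list(struct.unpack("<4I", b"expand 32-byte k"))
             + list(struct.unpack("<8I", key))
             + [counter] + list(struct.unpack("<3I", nonce)))
    working = state[:]
    for _ in range(10):  # 10 double rounds = 20 rounds
        quarter_round(working, 0, 4, 8, 12)
        quarter_round(working, 1, 5, 9, 13)
        quarter_round(working, 2, 6, 10, 14)
        quarter_round(working, 3, 7, 11, 15)
        quarter_round(working, 0, 5, 10, 15)
        quarter_round(working, 1, 6, 11, 12)
        quarter_round(working, 2, 7, 8, 13)
        quarter_round(working, 3, 4, 9, 14)
    return struct.pack("<16I",
                       *[(w + s) & 0xffffffff for w, s in zip(working, state)])

def chacha20_xor(key: bytes, nonce: bytes, data: bytes, counter: int = 1) -> bytes:
    """XOR data with the keystream: encrypts and decrypts alike."""
    out = bytearray()
    for i in range(0, len(data), 64):
        ks = chacha20_block(key, counter + i // 64, nonce)
        out.extend(b ^ k for b, k in zip(data[i:i + 64], ks))
    return bytes(out)
```

Applying `chacha20_xor` twice with the same key and nonce recovers the plaintext, which is the symmetric-stream-cipher property the Camouflage Data Disguise technique relies on.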
1.4. AIM AND OBJECTIVE
Aim
The aim of the project is to establish an advanced cloud data security framework by designing
a Cloaking Wall Model integrated with camouflage techniques. This framework seeks to
enhance privacy and access control for sensitive data stored in cloud environments.
Objectives
   •   To Develop a Robust Cloaking Wall Model: Create a secure foundation for advanced
       cloud data security.
   •   To Integrate Camouflage Techniques: Implement Camouflage Data Disguise for
       enhanced privacy.
   •   To Ensure Persistent Confidentiality: Fortify data security to prevent unauthorized
       access.
   •   To Achieve Global Consistency and Access Control: Establish mechanisms for timed
       access control and global consistency.
   •   To Implement Location-Sensitive Data Protection: Introduce measures for enhanced
       data protection based on user geography.
   •   To Develop Bot Identification and Targeted Content Distribution: Create a mechanism
       for bot identification and selective content distribution.
   •   To Optimize Certificate and Key Management: Streamline processes to reduce
       workloads and enhance simplicity.
   •   To Address Cross-Organizational Data Sharing Challenges: Provide a robust solution
       for secure cross-organizational data access.
   •   To Reduce Workloads of Certificate Management: Implement measures to streamline
       certificate management.
   •   To Simplify Key Management: Develop strategies for key management simplification
       while ensuring security.
1.5. SCOPE OF THE PROJECT
The project's scope is to enhance cloud data security through the implementation of the
Cloaking Wall Model integrated with advanced camouflage techniques. This includes the
development of a robust security framework, leveraging Long-Term Cloaking, Multi-Region,
Time-based, and Geolocation-based Cloaking. The project also explores the integration of
Chaffing and Winnowing with the ChaCha20 encryption algorithm for added privacy. Key
objectives encompass persistent confidentiality, global consistency, timed access control, and
location-sensitive protection. Additionally, the project addresses challenges in cross-
organizational data sharing, integrates a bot identification mechanism, and optimizes certificate
and key management for reduced workloads. The ultimate goal is to contribute to a secure and
privacy-preserving cloud data access environment while ensuring system efficiency and
simplicity.
                                        CHAPTER 2
                              LITERATURE SURVEY
2.1. Title: Stargazer: Long-Term and Multiregional Measurement of
Timing/Geolocation-Based Cloaking
Authors: Shota Fujii, Takayuki Sato, Sho Aoki
Year: 2023
Reference Link: https://ieeexplore.ieee.org/document/10138205
Problem: Malicious hosts employing cloaking techniques pose a significant threat in cyber-
attacks, particularly in evading detection by security vendors and researchers. The paper
specifically addresses the challenges posed by timing/geolocation-based cloaking and aims to
understand its characteristics and prevalence across different types of malicious hosts.
Objective:
The primary objective is to investigate and understand the characteristics of cloaking
techniques, with a focus on geofencing and time-based cloaking. The study aims to provide
insights into the longevity of malicious hosts employing cloaking and their ability to evade
existing detection technologies.
Methodology:
The authors implemented Stargazer, an active monitoring system for malicious hosts, to collect
data over an extended period. Stargazer detects both geofencing and time-based cloaking. The
study involves the observation of 18,397 malicious hosts over two years, providing long-term
and multiregional insights.
Algorithm/Techniques:
The specific algorithms or techniques used are not explicitly mentioned in the provided
information. However, Stargazer is mentioned as the tool implemented for active monitoring
and detection of cloaking techniques.
Merits:
Stargazer provides long-term and multiregional observations, enabling the detection of
cloaking techniques that may not be evident in single-site or one-shot observations. The study
contributes to a broader understanding of various cloaking types, including geographic and
temporal cloaking, extending beyond specific threats like phishing. Insights into the longevity
of malicious hosts and their absence from VirusTotal contribute to the development of future
cloaking detection methods.
Demerits:
The specific algorithms or techniques used by Stargazer for monitoring and detection are not
described in detail. The study may lack generalizability if the dataset is not representative of
a diverse range of malicious hosts and cyber threats.
The absence of a stated publication year limits the ability to assess the currency and relevance
of the information, and the lack of a reference link makes it challenging to verify the paper's
source or access additional related materials. The paper does not mention specific details about
the dataset used or any real-world implementation results, potentially limiting the
generalizability of the proposed approach. No information is provided on the scalability of the
proposed system or potential challenges in its practical implementation.
2.3. Title: Toward Automated Security Analysis and Enforcement for Cloud
Computing Using Graphical Models for Security
Author(s): Seongmo An, Asher Leung
Reference Link: https://ieeexplore.ieee.org/document/9828397
Problem:
The paper addresses the security challenges associated with the widespread adoption of cloud
computing. As cloud applications become more prevalent, so do security threat vectors and
vulnerabilities. The paper aims to propose a new security assessment and enforcement tool,
named CloudSafe, to address these challenges.
Objective:
The primary objective is to develop and demonstrate the applicability and usability of
CloudSafe, an automated security assessment tool for the cloud. The goal is to provide security
administrators with a tool that automates security assessment and enforces best security
controls for cloud environments.
Methodology:
The authors develop CloudSafe, a security assessment tool for the cloud. The tool collates
various security tools and conducts a security assessment in Amazon AWS. The paper also analyzes
four different security countermeasure options: Vulnerability Patching, Virtual Patching,
Network Hardening, and Moving Target Defence. Proofs of concept are developed to demonstrate
the effectiveness of each feasible countermeasure option.
Algorithm/Techniques:
The paper employs graphical security models for the security assessment within the CloudSafe
tool, focusing on collating various security tools for automated cloud security analysis. The
text lacks detailed information on specific algorithmic implementations.
Merits:
CloudSafe is presented as an automated security assessment tool for the cloud. It aims to
provide optimal security control recommendations by collating various security tools. Feasible
countermeasure options are analyzed, and proofs of concept are developed.
Demerits:
The publication year is not stated, and details about the dataset used are not provided. The
text also lacks specific information on the algorithms or techniques employed in the proposed
methodology.
2.4. Title: Secure Data Storage and Sharing Techniques for Data Protection
in Cloud Environments:
Authors: Ishu Gupta, Ashutosh Kumar Singh, Chung-Nan Lee, Rajkumar Buyya
Year: 2022
Reference Link: https://ieeexplore.ieee.org/document/9813692
Problem:
The adoption of cloud environments is widespread due to their advantages, such as minimal
upfront capital investment and maximum scalability. However, the cloud environment poses
several challenges, with data protection being a primary concern in information security and
cloud computing. Despite the development of numerous solutions, there is a lack of
comprehensive analysis among existing solutions, necessitating exploration, classification, and
analysis of significant works to assess their applicability.
Objective:
The paper aims to conduct a comparative and systematic study, providing an in-depth analysis
of leading techniques for secure data sharing and protection in cloud environments. The
objective is to investigate the functionality, potential, and revolutionary aspects of each
technique, along with core information like workflow, achievements, scope, gaps, and future
directions.
Methodology:
The authors conduct a comprehensive and comparative analysis of various techniques for
secure data storage and sharing. Each dedicated technique is discussed, highlighting its
functioning, potential solutions, and essential information. The paper also addresses research
gaps and future directions for each solution.
Algorithm/Techniques:
The techniques surveyed for secure data storage and sharing span cloud computing, data privacy
and security, data protection, data storage, data sharing, IoT, machine learning, cryptography,
watermarking, access control, differential privacy, and probabilistic approaches.
Merits:
Provides a comprehensive analysis of techniques for secure data storage and sharing in cloud
environments. Highlights the functionality and relevant solutions of each technique. Addresses
research gaps and suggests future directions for each discussed solution. Conducts an
exhaustive analysis and comparison of the referred techniques. Emphasizes the need for
integrating multiple techniques for robust data security.
Demerits:
The full paper may be challenging to access. The absence of specific algorithms or techniques
used in the analysis may limit the technical detail provided in the paper. No specific dataset
is mentioned, and the practical applicability of the discussed techniques may not be assessable
without such information.
The case study methodology is utilized to demonstrate security challenges in the context of a
smart campus scenario.
Algorithm/Techniques:
The paper mentions the use of blockchain technology as a revolutionary solution to improve
security in cloud computing.
Merits:
The paper provides an analysis of security and privacy challenges in cloud computing. It emphasizes the
importance of efficient security and privacy measures to ensure data integrity, privacy, and
reliability. The inclusion of a smart campus case study adds practical relevance to the research.
Recognition of the role of blockchain technology in enhancing cloud security is highlighted.
Demerits:
It is challenging to access the full details of the research. Specific algorithms or techniques
related to the implementation of blockchain technology are not detailed. The paper argues that
cloud service providers do not provide enough security but does not offer specific examples or
evidence, and the absence of a mentioned dataset limits the reproducibility and validation of
the findings.
The scheme aims to provide multi-class encryption while allowing computationally expensive
sparse signal recovery at the cloud, without compromising data privacy.
Methodology:
The MPCC scheme uses compressive sensing to perform computationally expensive sparse
signal recovery in a privacy-preserving manner. Three variants of the MPCC scheme are
designed for statistical decryption of smart meters, data anonymization of electrocardiogram
signals, and images. The scheme achieves two-class secrecy, catering to superusers and semi-
authorized users, with the latter obtaining statistical data or signals without sensitive
information.
Algorithm/Techniques:
Compressive sensing (CS) is employed as a key technique to recover sparse and compressible
signals using fewer measurements, providing efficient sampling and compact representation.
The MPCC scheme utilizes CS for multi-class encryption, enabling the cloud to perform
computationally expensive sparse signal recovery.
Merits:
Lower computational complexity at IoT sensor devices and data end-users compared to state-
of-the-art schemes. Efficient handling of multi-level data privacy using compressive sensing.
Performance improvement in terms of storage, encoding, decoding, and data transmission.
Theoretical security analysis proves the computational infeasibility of breaking the proposed
MPCC scheme. MPCC is shown to be secure against ciphertext-only attacks.
Demerits:
The paper does not provide information on the publication year or a reference link for further
details. Limited information is given on the specific algorithms or technical details of the
proposed MPCC scheme, and no information is provided on potential limitations or challenges
faced during its implementation.
As the use of biometric authentication has increased, the integration of biometric data into
cloud computing raises concerns about privacy and security. Database holders are inclined to
transfer large volumes of biometric information to the cloud to reduce storage and processing
costs. However, this exposes potential risks to users' privacy.
Objective:
The objective is an efficient and privacy-protecting biometric authentication strategy for
cloud computing. Specifically, the goal is to encrypt and securely transfer biometric
information to the cloud, perform recognition tasks on the encrypted data, and ensure
confidentiality during the entire authentication process.
Methodology:
The methodology involves encrypting biometric information before transferring it to the cloud
database. During a biometric verification process, the server holder encrypts the query
information and sends it to the cloud, where recognition tasks are performed on the encrypted
data. The conclusion is then sent back to the server holder. The system includes a systematic
security assessment to ensure protection against potential attacks on detection appeals and
collusion through the cloud.
Algorithm/Techniques:
A novel encryption procedure and cloud verification process are used to protect the
confidentiality of biometric data during the authentication process. The specific algorithms
and techniques used for encryption and cloud-based recognition are not explicitly mentioned.
Merits:
Efficient and privacy-protecting strategy: The proposed system aims to address the challenges
of securely transferring and processing biometric information in the cloud while ensuring user
privacy. Enhanced performance: According to the paper, the introduced strategy demonstrates
improved performance in both preparation and discovery measures compared to previous
protocols.
Demerits:
The paper does not provide detailed information on the specific encryption procedures and
algorithms used, making it challenging to assess the technical aspects thoroughly. The absence
of a reference link hinders access to additional resources or validation of the research
through external sources.
                                               19
2.8. Title: An Integrated Knowledge Graph to Automate Cloud Data Compliance
Author: Karuna Pande Joshi, Lavanya Elluri, Ankur Nagar
Year: 2020
Reference Link: https://ieeexplore.ieee.org/document/9139461
Problem:
Data protection concerns in the cloud have led to the emergence of various regulations and
guidelines worldwide. However, these regulations are often not available in a machine-
processable format, requiring significant manual effort from service providers to adhere to
them. Additionally, overlapping rules and lack of referencing between regulations result in
duplicated compliance efforts.
Objective:
To address data protection concerns by developing an integrated, semantically rich knowledge
graph that captures the various data compliance regulations related to cloud data. The objective
is to automate compliance processes for organizations and enhance enterprise cloud security
policies.
Methodology:
The authors studied the data protection regulations applicable to cloud data and developed a
knowledge graph that encompasses data threats and the security controls necessary to mitigate
risks. The paper presents the knowledge graph and describes the system developed to evaluate
it. Validation was performed against the privacy policies of cloud service providers such as
Amazon, Google, IBM, and Rackspace.
Algorithm/Techniques:
The work focuses on developing a semantically rich knowledge graph and evaluating it against
the privacy policies of major cloud service providers.
Merits:
Development of an integrated knowledge graph for cloud data compliance. Semantically rich
representation capturing various data compliance regulations. Validation against privacy
policies of major cloud service providers. Availability of the knowledge graph in the public
domain for organizations to automate compliance processes.
Demerits:
Lack of specific details on the algorithms or techniques employed. Limited information on the
dataset used for validation. The scope of the study is focused on security compliance, with
potential room for expansion into other IT compliance models.
Demerits:
The adaptive multivariable control strategy is not detailed, limiting the reproducibility and
understanding of the proposed approach. The dataset used in the experiments is not described,
making it difficult to assess the generalizability of the results.
Demerits:
The IBEET-FA scheme is based on bilinear pairing, which is computationally expensive. The
paper suggests that future work could explore constructing IBEET-FA schemes without relying
on bilinear pairing, indicating a potential limitation of the current approach.
                                      CHAPTER 3
                                SYSTEM ANALYSIS
3.1. EXISTING SYSTEM
The existing system of cloud outsourced data protection encompasses various mechanisms and
practices implemented to safeguard data stored in the cloud environment. Here is an overview
of key elements in the current landscape:
    •      Encryption:
Encryption is a fundamental component of data protection in the cloud. It involves the use of
cryptographic algorithms to convert data into a secure format that can only be accessed with
the appropriate decryption key. Both data at rest and data in transit are typically encrypted to
prevent unauthorized access.
    •      Access Controls and Identity Management:
Robust access controls and identity management systems are implemented to regulate who can
access data in the cloud. This involves assigning and managing user roles, permissions, and
authentication mechanisms to ensure that only authorized individuals or systems can interact
with sensitive information.
    •      Firewalls and Network Security:
Network security measures, including firewalls, are employed to protect the cloud
infrastructure. Firewalls monitor and control incoming and outgoing network traffic based on
predetermined security rules. This helps prevent unauthorized access and potential cyber
threats.
    •      Regular Audits and Monitoring:
Continuous monitoring and regular audits of cloud environments are conducted to identify and
respond to security incidents promptly. This involves tracking user activities, system events,
and potential vulnerabilities, providing a proactive approach to addressing security concerns.
3.1.1. Disadvantages
   •   Encryption complexity may impact system performance.
   •   Traditional authentication methods may be vulnerable to attacks.
   •   Difficulty in navigating complex data protection regulations and ensuring compliance.
   •   Heavy reliance on cloud service providers, with potential vulnerabilities in their
       security practices impacting user data.
   •   Constraints in tailoring security measures to unique organizational requirements.
   •   Service disruptions, maintenance, or cyber-attacks leading to temporary loss of access
       to data.
   •   Secure protocols may be inconsistently implemented.
3.2. PROPOSED SYSTEM
The proposed system endeavours to revolutionize cloud data security by introducing a
sophisticated Cloaking Wall Model specifically designed to safeguard organizational
operations in the ever-evolving landscape of cloud computing. Addressing the inherent
challenges of cross-organizational data sharing, the system prioritizes the development of
advanced security measures. Among these measures are leakage-suppressed access controls,
ensuring that data remains confidential and shielded from unauthorized exposure. Additionally,
lightweight access controls are implemented to streamline user authentication and
authorization processes, minimizing computational overhead. The proposed system
incorporates cutting-edge encryption techniques to ensure persistent confidentiality of data
stored in the cloud, encompassing data at rest, in transit, and during processing. To enhance
global consistency in access controls, the system adopts a unified policy approach, addressing
challenges associated with multi-region data access. Furthermore, time-based access controls
are implemented, allowing organizations to enforce temporal restrictions on data access and
fortify security by limiting access to predefined time windows. Location-sensitive protection
mechanisms are also introduced, providing an additional layer of security through geofencing
and tailored policies based on user locations. The Cloaking Wall Model incorporates four
distinct methods to fortify data security and protect the access patterns of stored data. These
methods are strategically designed to address various aspects of security challenges in cloud
data storage:
   •   Long-Term Cloaking
This method focuses on providing extended protection for sensitive data over prolonged
durations. It involves concealing access patterns and data usage trends over an extended
timeframe, ensuring persistent confidentiality. Long-term cloaking contributes to maintaining
the privacy and security of stored data over extended periods, preventing unauthorized
inference from patterns of access.
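As a hedged illustration of how access-pattern concealment might work in principle (the report does not specify an algorithm, so the batching scheme and function name below are assumptions), a client can hide which object it actually wants by fetching every object once, in random order, for each real request:

```python
import random
from collections import Counter

def cloaked_trace(real_requests, object_ids, rng=random):
    """For every real request, emit one shuffled pass over ALL objects.

    An observer of the resulting trace sees uniform access frequencies,
    so the long-term popularity of individual objects is concealed; the
    extra bandwidth is the price paid for that concealment.
    """
    trace = []
    for _wanted in real_requests:   # the wanted object is served inside the batch
        batch = list(object_ids)
        rng.shuffle(batch)
        trace.extend(batch)
    return trace

# "a" is requested twice as often as "b", yet the observable counts are equal.
trace = cloaked_trace(["a", "a", "b"], ["a", "b", "c"])
counts = Counter(trace)
```

This dummy-request padding conveys the idea only; production systems that hide access patterns (for example, ORAM constructions) are considerably more sophisticated.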
   •   Multi-Region Based Cloaking
Recognizing the global nature of cloud services, multi-region-based cloaking involves
implementing security measures that transcend geographical boundaries. By considering the
diverse locations from which data access may occur, this method ensures a consistent and
standardized security posture globally. It addresses the challenges associated with data access
from different regions, providing a unified approach to access control policies.
   •   Time-Based Cloaking
Time-based cloaking introduces temporal restrictions on data access, allowing organizations to
define specific time windows during which data can be accessed. This method enhances
security by limiting access to predefined timeframes, reducing the exposure of data to potential
threats. Time-based cloaking adds an additional layer of control to access patterns, contributing
to a more secure cloud data storage environment.
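A minimal sketch of such a temporal check (the function name and the window representation are illustrative assumptions, not the project's actual API):

```python
from datetime import time

def in_access_window(now, windows):
    """Return True if `now` falls inside any permitted (start, end) window.

    A window whose end precedes its start is treated as wrapping past
    midnight (e.g. a 22:00-06:00 batch-processing window).
    """
    for start, end in windows:
        if start <= end:
            if start <= now <= end:
                return True
        elif now >= start or now <= end:   # window wraps around midnight
            return True
    return False

business_hours = [(time(9, 0), time(17, 0))]
in_access_window(time(10, 30), business_hours)   # -> True
in_access_window(time(18, 0), business_hours)    # -> False
```

An access-control layer would evaluate this check before serving any data request, denying or cloaking responses outside the administrator-defined windows.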
   •   Geolocation-Based Cloaking
Geolocation-based cloaking involves tailoring data protection measures based on the
geographical location of users. This method adds a location-sensitive layer of security, ensuring
that access to sensitive data is contingent on the user's physical location. Geolocation-based
cloaking is particularly relevant for organizations with diverse and dispersed user bases,
providing a customized approach to access control based on geographic parameters.
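One way such a location-sensitive check could be realized (a sketch; the report does not name a specific geofencing algorithm) is a great-circle distance test against administrator-defined circular fences:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (latitude, longitude) points."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def geofence_allows(user_pos, fences):
    """fences: iterable of (lat, lon, radius_km) circles where access is allowed."""
    lat, lon = user_pos
    return any(haversine_km(lat, lon, flat, flon) <= radius
               for flat, flon, radius in fences)

# Example: a hypothetical 5 km fence around an office location.
office_fence = [(13.0827, 80.2707, 5.0)]
```

Requests whose reported coordinates fall outside every fence would then be denied or served disguised data, per the model's policy.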
These four methods collectively form the foundation of the Cloaking Wall Model, contributing
to persistent confidentiality, global consistency, timed access control, and location-sensitive
protection. By integrating these techniques, the model aims to address critical challenges in
cross-organizational data sharing, offering a robust solution for secure and privacy-preserving
cloud data access.
   •   Camouflage Data Disguise
The Camouflage Data Disguise technique is an advanced cryptographic approach that
integrates Chaffing and Winnowing with the ChaCha20 encryption algorithm. It is strategically
designed to serve disguised data as a countermeasure against unauthorized access, targeting
both unpermitted users and potentially malicious bots.
3.2.1. Advantages
   •   Ensures data confidentiality at rest, in transit, and during processing.
   •   Provides a standardized security posture globally.
   •   Distinguishes between authorised and unauthorised users for targeted content
       distribution.
   •   Reduces administrative workloads through efficient certificate management.
   •   Provides an advanced layer of data privacy, ensuring that sensitive information remains
       confidential during transmission.
   •   Minimizes the risk of unauthorized access.
                                 CHAPTER 4
                      SYSTEM REQUIREMENTS
4.1. HARDWARE REQUIREMENTS
    •   Processor: Intel Core i5 or higher (recommended)
    •   RAM: 8GB or more
    •   Storage: SSD with at least 256GB capacity
    •   Network Interface: Ethernet or Wi-Fi adapter
    •   Monitor: Minimum resolution of 1280x800 pixels
    •   Input Devices: Keyboard and mouse
                                      CHAPTER 5
                                 SYSTEM DESIGN
5.1. INPUT DESIGN
   1. User Authentication Input: Enables secure user access through username, password,
       and optional multi-factor authentication methods like OTP or biometric verification.
   2. Data Management Input: Facilitates efficient cloud storage management with
       features such as file upload, data category specification, and access control settings for
       managing permissions.
   3. User Management Input: Allows administrators to manage users effectively by
       adding new users with detailed information, modifying user permissions, and removing
       users as needed.
   4. Access Policy Configuration Input: Provides flexibility in defining access policies
       with options for time-based restrictions, geolocation-based permissions, and multi-
       region access rules.
   5. Monitoring and Auditing Input: Ensures transparency and accountability through
       user activity logs detailing login attempts, data access, and modifications, along with
       system logs capturing operations, errors, and warnings.
   6. Bot Identification Input: Enhances security measures by analyzing access patterns
       and configuring anomaly detection parameters to identify and mitigate suspicious
       behavior.
   7. Disguise Data Generation Input: Strengthens data protection with tools for simulating
       data under specified parameters and configuring encryption settings to safeguard
       sensitive information.
   8. Alerts and Notifications Input: Keeps stakeholders informed through configurable
       alert settings for triggering notifications on threshold breaches or policy violations via
       preferred channels like email, SMS, or in-app notifications.
By integrating these input functionalities, the project aims to deliver a robust solution that
meets operational requirements, enhances security measures, and supports effective
management of user access and data protection in cloud environments.
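For the OTP step mentioned in item 1, a counter-based one-time password can be computed with the standard HOTP construction (RFC 4226). The sketch below uses only the Python standard library and is illustrative rather than the project's actual implementation:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector:
hotp(b"12345678901234567890", 0)   # -> "755224"
```

During login, the server compares the code the user submits against `hotp` evaluated at the expected counter (or, for time-based OTP, at the current 30-second time step).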
5.2. OUTPUT DESIGN
   1. User Authentication Output: Provides feedback such as a login success message upon
       successful authentication or a login failure message for unsuccessful attempts, ensuring
       users are informed of their access status.
   2. Data Management Output: Confirms file uploads with an upload confirmation
       message and organizes uploaded data visibly by category or type, facilitating efficient
       management and retrieval.
   3. User Management Output: Confirms actions like user additions with a confirmation
       message and updates users about modifications to roles and permissions, ensuring
       transparent user management processes.
   4. Access Policy Configuration Output: Confirms successful configuration of access
       policies with a configuration confirmation message and alerts administrators with
       policy enforcement alerts when violations occur, ensuring adherence to established
       policies.
   5. Monitoring and Auditing Output: Displays user activity logs, system logs, and
       anomaly detections in tabular or graphical formats, providing administrators with
       comprehensive insights into system operations and historical data access.
   6. Bot Identification Output: Alerts administrators with detection alerts upon identifying
       potential bot activity and visualizes bot access patterns for detailed analysis, enhancing
       system security against automated threats.
   7. Disguise Data Generation Output: Displays simulated data instances and their
       characteristics, confirming successful encryption of disguised data to strengthen data
       protection measures.
   8. Alerts and Notifications Output: Provides alerts or notifications regarding policy
       violations, security threats, or system errors, along with a record of sent notifications
       and their delivery status, ensuring timely response and action.
By integrating these output functionalities, the project aims to enhance transparency, security,
and operational efficiency, providing users and administrators with clear and actionable
information to support informed decision-making and ensure smooth system operation.
5.3. DATABASE DESIGN
  1. User Table
        o   Fields: user_id (Primary Key), username, password_hash, email, role_id
            (Foreign Key), created_at, updated_at.
        o   Stores user credentials and metadata including username, hashed password,
            email address, associated role, and timestamps for record creation and updates.
        o   This table supports user authentication and management functionalities, linking
            users to their respective roles for access control purposes.
  2. Role Table
        o   Fields: role_id (Primary Key), role_name.
        o   Defines roles within the system, facilitating role-based access control (RBAC)
            by assigning specific permissions to each role.
        o   Allows administrators to assign roles to users, determining their access levels
            and privileges throughout the application.
  3. Data Table
        o   Fields: data_id (Primary Key), data_name, file_path, owner_id (Foreign Key),
            created_at, updated_at.
        o   Stores details about uploaded data including data name, file path, owner
            identification, and timestamps of creation and updates.
        o   Enables efficient data management by providing a structured repository for
            storing and retrieving data files uploaded to the cloud storage.
  4. Access Policy Table
        o   Fields:     policy_id     (Primary       Key),      policy_name,     description,
            long_term_cloaking_enabled,                       multi_region_cloaking_enabled,
            time_based_cloaking_enabled,              geolocation_based_cloaking_enabled,
            created_at, updated_at.
        o   Defines access policies governing data access based on various parameters such
            as time restrictions, geographical locations, and cloaking preferences.
        o   Facilitates dynamic access control by configuring policies that dictate how and
            when users can access sensitive data, ensuring compliance with security
            requirements.
  5. User Access Table
      o   Fields: user_access_id (Primary Key), user_id (Foreign Key), data_id (Foreign
          Key), access_permission, created_at, updated_at.
      o   Tracks user-specific permissions for accessing specific data files, recording
          access permissions and timestamps of permission updates.
      o   Supports fine-grained access control by linking users to the data they are
          authorized to access, enforcing data security and confidentiality measures.
6. Bot Activity Table
      o   Fields: bot_activity_id (Primary Key), user_id (Foreign Key), access_time,
          access_pattern, is_bot_detected, created_at.
      o   Logs suspected bot activities including user IDs, access times, access patterns,
          detection status, and timestamps of activity detection.
      o   Enhances security measures by monitoring and identifying automated bot
          activities, triggering alerts for potential security threats.
7. Access Log Table
      o   Fields: log_id (Primary Key), user_id (Foreign Key), data_id (Foreign Key),
          access_time, access_type, access_result, created_at.
      o   Records user interactions with data files including access times, types of access
          (read/write), access results, and timestamps of log entries.
      o   Provides a comprehensive audit trail of data access activities, aiding in
          compliance audits and forensic investigations.
8. Alerts Table
      o   Fields:   alert_id (Primary Key),          user_id   (Foreign   Key),   alert_type,
          alert_message, alert_time, is_read, created_at.
      o   Stores alerts triggered by system events such as policy violations, security
          incidents, or operational errors, recording alert types, messages, timestamps,
          and read status.
      o   Notifies administrators and users about critical events requiring attention or
          action, ensuring timely response and resolution of issues.
9. Data Masking Table
      o   Fields: masking_id (Primary Key), data_id (Foreign Key), masking_algorithm,
          masked_data, created_at, updated_at.
      o   Manages data masking configurations including masking algorithms applied to
          sensitive data fields, storing masked data representations and timestamps of
          masking operations.
             o   Enhances data security by obfuscating sensitive information while maintaining
                 usability and compliance with privacy regulations.
    10. Configuration Table
             o   Fields: config_id (Primary Key), config_name, config_value, created_at,
                 updated_at.
             o   Stores system configuration settings such as feature toggles, operational
                 parameters, and customization options, recording configuration names, values,
                 and timestamps of changes.
             o   Facilitates system customization and maintenance by providing a centralized
                 repository for managing and adjusting system settings according to operational
                 needs and requirements.
This database design supports the project's objectives by ensuring robust data management,
secure access control, comprehensive auditing capabilities, proactive threat detection, and
flexible system configuration. It establishes a solid foundation for implementing the project's
functionalities while maintaining scalability, reliability, and data integrity throughout its
lifecycle.
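As a minimal, hedged sketch, the first two tables (Role and User) can be expressed as follows; SQLite is used here purely for illustration, and the production engine, column types, and seed values may differ:

```python
import sqlite3

# In-memory database standing in for the cloud application's store.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE role (
    role_id   INTEGER PRIMARY KEY,
    role_name TEXT NOT NULL
);
CREATE TABLE user (
    user_id       INTEGER PRIMARY KEY,
    username      TEXT NOT NULL UNIQUE,
    password_hash TEXT NOT NULL,
    email         TEXT,
    role_id       INTEGER REFERENCES role(role_id),
    created_at    TEXT DEFAULT CURRENT_TIMESTAMP,
    updated_at    TEXT DEFAULT CURRENT_TIMESTAMP
);
""")
conn.execute("INSERT INTO role (role_id, role_name) VALUES (1, 'admin')")
conn.execute(
    "INSERT INTO user (username, password_hash, role_id) VALUES (?, ?, ?)",
    ("demo_admin", "<hashed password>", 1),   # hypothetical seed row
)
# Resolve a user's role for access-control (RBAC) decisions.
row = conn.execute(
    "SELECT u.username, r.role_name FROM user u "
    "JOIN role r ON u.role_id = r.role_id"
).fetchone()
```

The join mirrors how the application would look up a user's role before evaluating access-policy checks against the remaining tables.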
                                  CHAPTER 6
                     SYSTEM ARCHITECTURE
6.1. SYSTEM ARCHITECTURE
[Figure 6.1: System architecture, showing Login, Upload Data, the Cloaking Area with
Long-Term Cloaking and Region-Based Cloaking, and the Malicious User path.]
6.2. DATA FLOW DIAGRAM
LEVEL 0
LEVEL 1
LEVEL 2
6.3. UML DIAGRAM
6.3.1. USE CASE DIAGRAM
6.3.2. ACTIVITY DIAGRAM
6.3.3. SEQUENCE DIAGRAM
6.3. TABLE DESIGN
                                      Admin Login
 S.No          Field     Data Type       Field size    Constraint      Description
 1      username       Varchar           20            Null         Admin Username
 2      password       Varchar           20            Null         Admin Password
                                  6.3.1. Admin Login
                                        CC: Data User
S.No         Field         Data Type        Field size     Constraint         Description
1      id                 Int(11)           11             Null           Unique Id
2      owner id           Varchar(20)       20             Foreign key    Owner id
3      name               Varchar(20)       20             Null           user Name
4      gender             Varchar(20)       20             Null           user gender
5      dob                Varchar(20)       20             Null           User dob
6      mobile             Bigint(20)        20             Null           User Mobile
7      email              Varchar(40)       40             Null           user Email
8      location           Varchar(30)       30             Null           User location
9      user id            Varchar(20)       20             Primary Key    user id
10     password           Varchar(20)       20             Null           user password
                                        6.3.3. Data User
                          CC: Long Term Wall Model
S.No         Field      Data Type     Field size     Constraint         Description
1       id             Int(11)        11             Primary Key    Unique Id
2      Owner id        Varchar(20)    20             Foreign key    Owner id
3      User id         Varchar(20)    20             Null           User id
4      File id         Int(11)        11             Null           File id
5      Shared date     Timestamp      Null           Null           File shared date
                          6.3.5. Long Term Wall Model
                              CC: Region Wall Model
S.No         Field         Data Type       Field size    Constraint        Description
1       id                Int(11)          11            Null          Unique Id
2      Owner id           Varchar(20)      20            Foreign key   Owner id
3      User id            Varchar(20)      20            Null          User id
4      Region             Varchar(30)      30            Null          Region Name
5      Location           Varchar(100) 100               Null          Location
6      File id            Int(11)          11            Null          File id
7      Shared date        Timestamp        Null          Null          File share date
                              6.3.8. Region Wall Model
                                      CHAPTER 7
                        SYSTEM IMPLEMENTATION
7.1. PROJECT DESCRIPTION
The project focuses on advancing cloud data security through the implementation of a
comprehensive Cloaking Wall Model integrated with advanced camouflage techniques within
a Cloud Consumer Web App. This web application consists of various interconnected modules
designed to streamline cloud resource management while ensuring robust security measures.
The user authentication module serves as a secure entry point, employing multi-factor
authentication for enhanced security. The dashboard module, at the core of the application,
provides an intuitive interface for users to manage cloud resources comprehensively.
Integrated with the Cloaking Wall Model, this module ensures global consistency in security
measures. The End User Interface comprises distinct modules for Admins or Data Owners and
Data Users. Admins can securely log in, add and manage data and users, provide login credentials, set
access policies using the Cloaking Wall Model, and monitor data access. Data Users, on the
other hand, access allocated data and monitor their own data access. The Cloaking Wall Model
itself consists of Long-Term Cloaking, Multi-Region based Cloaking, Time-Based Cloaking,
and Geolocation-Based Cloaking modules, providing advanced data protection and access
control. The Access Policy Configurator empowers administrators to define and customize
access policies based on these principles. The Bot Identification and Data Distribution module
ensures the identification of automated bots and selectively distributes content based on policy
adherence. The Disguise Data Generator employs Chaffing and Winnowing with ChaCha20
encryption, generating disguised data for non-compliant users. Monitoring and Auditing
capture real-time activities, and the Alerts and Notification module provides immediate alerts
for policy violations. This project aims to create a secure and user-friendly Cloud Consumer
Web App with advanced data protection features, contributing to enhanced privacy and access
control in cloud data storage.
7.2. MODULES DESCRIPTION
7.2.1. Cloud Service Provider Web App
The design and development of a Cloud Consumer Web App involve several interconnected
modules, each contributing to the seamless and efficient management of cloud resources. The
user authentication module serves as the entry point, ensuring secure access through robust
registration and authentication processes, including multi-factor authentication for enhanced
security. The heart of the application lies in the dashboard module, providing an intuitive
interface for users to oversee and manage their cloud resources comprehensively. This module
encompasses features for resource provisioning, scaling, and configuration, offering a
centralized hub for users to interact with their cloud services seamlessly. Integrated with the
Cloaking Wall Model, the dashboard also ensures global consistency in security measures. The
monitoring and alert module empowers users with real-time insights into the performance of
their cloud resources and timely notifications of any irregularities.
   •   Provide Login Credentials to Users:
Admins can generate and distribute login credentials for users added to the system. This ensures
a secure onboarding process for new users.
   •   Set Access Policy using Cloaking Wall Model:
Leveraging the Cloaking Wall Model, this module enables Admins to set access policies.
Admins can define Long-Term Cloaking, Multi-Region based Cloaking, Time-based Cloaking,
and Geolocation-based Cloaking to enhance data security.
   •   Monitoring Data Access:
Admins can monitor and audit data access patterns using this module. It provides insights into
who accessed specific data, when, and from which location, contributing to overall security
and compliance.
2.2. Data User Interface:
   •   Login Module:
Similar to the Admin interface, the login module provides secure authentication for Data Users,
ensuring that only authorized individuals can access the cloud resources.
   •   Access Data:
Data Users can use this module to access the data allocated to them. The interface provides a
user-friendly environment for retrieving, modifying, or analysing data based on their
permissions.
   •   Monitoring Data Access:
Data Users have limited access to monitoring tools to track their own data access. This module
allows them to review their activity and ensures transparency in usage.
        3.2. Multi-Region Based Cloaking Module:
This module facilitates a unified security approach across diverse geographical regions.
Admins can define access controls that transcend geographic boundaries, ensuring consistent
security measures globally and addressing challenges related to multi-region data access.
        3.3. Time-Based Cloaking Module:
Time-Based Cloaking empowers Admins to set temporal restrictions on data access. This
module enhances security by allowing the definition of specific time windows during which
data can be accessed, adding an extra layer of control over temporal access patterns.
        3.4. Geolocation-Based Cloaking Module:
The Geolocation-Based Cloaking module tailors data protection based on
Admins can define security policies that vary depending on the physical location of Data Users,
adding a location-sensitive layer to access controls.
    •   Access Pattern Deviation:
The mechanism continually monitors user access patterns based on the access policies defined
in the Cloaking Wall Model. If an entity exhibits access patterns that deviate significantly from
the established policies, it raises suspicion for potential bot activity.
    •   Policy Adherence Assessment:
Each user, including potential bots, is assessed against the defined access policies. Legitimate
Data Users are expected to follow the specified Long-Term Cloaking, Multi-Region based
Cloaking, Time-Based Cloaking, and Geolocation-Based Cloaking rules. Any entity not
adhering to these policies is flagged for further scrutiny.
    •   Anomalous Access Timing:
Bots often operate on predefined schedules or exhibit unnatural timing patterns. The
mechanism detects anomalous access timings that do not align with the specified time-based
access policies. This helps identify bots attempting to access data outside permissible time
windows.
    •   Geolocation Inconsistencies:
The mechanism evaluates the geolocation of incoming requests in comparison to the
Geolocation-Based Cloaking policies. If there are inconsistencies, such as requests originating
from unexpected or restricted locations, it signals potential bot activity.
    •   Rapid, Repetitive Access Attempts:
Bots typically attempt to access data rapidly and repetitively, following scripted sequences.
The mechanism identifies patterns associated with rapid, repetitive access attempts, flagging
entities that display such behaviour for further scrutiny.
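The "rapid, repetitive access" signal can be approximated with a sliding-window rate check; the class name and thresholds below are illustrative assumptions rather than the project's tuned values:

```python
from collections import deque

class RateAnomalyDetector:
    """Flag an entity whose request rate exceeds max_requests per window_s seconds."""

    def __init__(self, window_s=10.0, max_requests=20):
        self.window_s = window_s
        self.max_requests = max_requests
        self._events = {}   # entity -> deque of recent request timestamps

    def record(self, entity, timestamp):
        """Record one access; return True if the entity now looks like a bot."""
        q = self._events.setdefault(entity, deque())
        q.append(timestamp)
        while q and timestamp - q[0] > self.window_s:
            q.popleft()                    # drop events outside the window
        return len(q) > self.max_requests

detector = RateAnomalyDetector(window_s=10.0, max_requests=20)
# A scripted client firing every 0.1 s is flagged within the first window:
flags = [detector.record("client-42", i * 0.1) for i in range(30)]
```

In practice this rate signal would be combined with the policy-adherence, timing, and geolocation checks above before an entity is finally classified as a bot.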
                                                 50
countermeasure against unauthorized access, targeting both unpermitted users and potentially
malicious bots. By doing so, it ensures that the distribution of potentially sensitive or malicious
content is restricted to the intended, permissioned users.
   •      Chaffing and Camouflage Process:
The module orchestrates a two-fold process: Chaffing, which adds decoy or chaff data, and
Camouflage, which further disguises non-compliant data through additional obfuscation
techniques. This combined approach ensures a multi-layered defense against unauthorized
access.
   •      ChaCha20 Encryption:
Genuine, chaff, and camouflaged data undergo encryption using the ChaCha20 algorithm.
ChaCha20's strength in providing a secure and efficient encryption process contributes
significantly to safeguarding the confidentiality of the disguised information.
   •      Winnowing Process:
At the recipient's end, the winnowing process disentangles the genuine data from the chaff and
camouflage layers. The ChaCha20 decryption algorithm, combined with the appropriate key,
unveils the original, non-compliant data.
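The chaffing and winnowing steps follow the classical scheme of mixing authenticated genuine packets with decoys; the sketch below illustrates it with HMAC-SHA256 standing in for the project's exact construction (the key and packet format are assumptions):

```python
import hmac
import hashlib
import os

KEY = b"shared-secret-key"  # authentication key known to sender and receiver (assumed)

def mac(packet: bytes) -> bytes:
    return hmac.new(KEY, packet, hashlib.sha256).digest()

def chaff(wheat: list) -> list:
    """Interleave genuine packets (valid MAC) with decoy chaff (random MAC)."""
    stream = [(p, mac(p)) for p in wheat]
    stream += [(os.urandom(8), os.urandom(32)) for _ in wheat]  # decoys
    return stream

def winnow(stream: list) -> list:
    """Keep only packets whose MAC verifies under the shared key."""
    return [p for p, m in stream if hmac.compare_digest(mac(p), m)]
```

The recipient, holding the key, winnows the chaff away and recovers the genuine packets; an eavesdropper without the key cannot tell wheat from chaff.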
6.2.8. Alerts and Notification
Upon detection of a policy violation, the module triggers immediate alerts, swiftly notifying
administrators through various channels such as email, SMS, in-app messages, or other
preferred communication methods. These alerts are designed to provide administrators with
detailed information about the nature of the policy violation, offering insights into unauthorized
access attempts, data modifications beyond permitted levels, or breaches of temporal and
geolocation constraints. This proactive alerting mechanism minimizes the latency between the
occurrence of a policy violation and the notification to administrators, facilitating a rapid and
targeted response.
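One possible realization of the email channel is to compose the alert with Python's standard library; the addresses below are placeholders, and the SMTP delivery step is omitted:

```python
from email.message import EmailMessage

def build_alert(violation: str, user: str, detail: str) -> EmailMessage:
    """Compose a policy-violation alert for an administrator (delivery omitted)."""
    msg = EmailMessage()
    msg["Subject"] = f"[ALERT] {violation} by {user}"
    msg["From"] = "noreply@example.com"   # placeholder sender
    msg["To"] = "admin@example.com"       # placeholder administrator address
    msg.set_content(detail)
    return msg
```

The resulting message object could then be handed to an SMTP client or a library such as Flask-Mail for delivery.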
                                      CHAPTER 8
                                 SYSTEM TESTING
8.1. SOFTWARE TESTING
System testing for the project would encompass various types of testing to ensure the
robustness, functionality, and security of the system. Here are some key types of testing that
would be relevant:
8.1.1. TYPES OF TESTING
   1. Functional Testing: This type of testing verifies that each function of the system
       operates in accordance with the requirements specified in the design documents. It
       includes testing features like user authentication, data management, access control
       policies, and monitoring capabilities.
   2. Integration Testing: Integration testing ensures that individual components of the
       system work together seamlessly as a whole. It validates interactions between different
       modules, APIs, and external systems, including the integration of the Cloaking Wall
       Model with the Cloud Consumer Web App.
   3. Performance Testing: Performance testing assesses the responsiveness, scalability,
       and stability of the system under various load conditions. It includes testing the system's
       ability to handle concurrent users, process requests efficiently, and maintain acceptable
       response times.
   4. Security Testing: Security testing evaluates the system's resilience against potential
       security threats and vulnerabilities. It includes testing for authentication mechanisms,
       encryption protocols, access control measures, data masking techniques, and protection
       against common cyber threats like SQL injection and cross-site scripting (XSS).
   5. Usability Testing: Usability testing focuses on assessing the system's user interface
       (UI) design, navigation flow, and overall user experience. It involves gathering
       feedback from end-users to identify any usability issues, accessibility concerns, or areas
       for improvement in terms of user interaction and interface design.
   6. Compatibility Testing: Compatibility testing ensures that the system functions
       correctly across different platforms, browsers, devices, and operating systems. It
       verifies that the application is compatible with popular web browsers, mobile devices,
       and screen resolutions, ensuring a consistent user experience across diverse
       environments.
7. Regression Testing: Regression testing validates that recent code changes or
   enhancements do not introduce new defects or regressions in existing functionality. It
   involves retesting previously validated features and conducting automated regression
   test suites to ensure that the system remains stable and reliable after updates.
8. Load Testing: Load testing evaluates the system's performance under expected and
   peak load conditions. It involves simulating a high volume of concurrent users or data
   requests to assess how the system handles stress, scalability, and resource utilization.
9. Stress Testing: Stress testing assesses the system's behavior under extreme conditions
   beyond its normal operational capacity. It involves pushing the system to its limits to
   identify potential failure points, bottlenecks, or areas of weakness under heavy load,
   unexpected inputs, or adverse environmental conditions.
10. Data Integrity Testing: Data integrity testing verifies the accuracy, consistency, and
   reliability of data stored and processed by the system. It includes testing data validation
   rules, data manipulation operations, and data integrity constraints to ensure that data
   remains intact and error-free throughout its lifecycle.
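A functional test such as the user-authentication case can be expressed with Python's unittest module; the `authenticate` helper here is a hypothetical stand-in for the system's login routine:

```python
import unittest

# Hypothetical stand-in for the system's credential check.
VALID_USERS = {"owner1": "s3cret"}

def authenticate(username: str, password: str) -> bool:
    return VALID_USERS.get(username) == password

class UserAuthenticationTest(unittest.TestCase):
    def test_valid_login(self):
        self.assertTrue(authenticate("owner1", "s3cret"))

    def test_invalid_password(self):
        self.assertFalse(authenticate("owner1", "wrong"))
```

Each test case in the next section could be automated in this style and collected into a regression suite.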
8.2. TEST CASES
  1. User Authentication:
        •   Test Case ID: UA_TC_001
        •   Input: Valid username and password with correct multi-factor authentication.
        •   Expected Result: Successful authentication, granting access to the system.
        •   Actual Result: User successfully authenticated, and access granted.
        •   Status: Pass
  2. Dashboard Access:
        •   Test Case ID: DA_TC_001
        •   Input: Successful authentication credentials.
        •   Expected Result: Access to the dashboard module.
        •   Actual Result: User gained access to the dashboard.
        •   Status: Pass
  3. Cloaking Wall Model Integration:
        •   Test Case ID: CW_TC_001
        •   Input: Accessing the dashboard module.
        •   Expected Result: Integration of the Cloaking Wall Model into the dashboard.
        •   Actual Result: Cloaking Wall Model successfully integrated.
        •   Status: Pass
  4. Access Policy Configuration:
        •   Test Case ID: APC_TC_001
        •   Input: Admin accessing the Access Policy Configurator.
        •   Expected Result: Successful configuration of access policies.
        •   Actual Result: Access policies configured without errors.
        •   Status: Pass
  5. Data Management by Admins:
        •   Test Case ID: DMA_TC_001
        •   Input: Admin uploading and managing data.
        •   Expected Result: Successful organization and control of data.
        •   Actual Result: Admin successfully managed and organized data.
        •   Status: Pass
  6. User Management by Admins:
        •   Test Case ID: UMA_TC_001
      •   Input: Admin adding and managing users.
      •   Expected Result: Successful addition and management of users.
      •   Actual Result: Admin added and managed users without issues.
      •   Status: Pass
7. Data Access by Data Users:
      •   Test Case ID: DAU_TC_001
      •   Input: Data User accessing allocated data.
      •   Expected Result: Successful access to designated data.
      •   Actual Result: Data User accessed allocated data successfully.
      •   Status: Pass
8. Monitoring Data Access:
      •   Test Case ID: MDA_TC_001
      •   Input: Admin or Data User accessing monitoring tools.
      •   Expected Result: Insightful tracking of data access patterns.
      •   Actual Result: Monitoring tools provided insightful data access patterns.
      •   Status: Pass
9. Bot Identification Mechanism:
      •   Test Case ID: BIM_TC_001
      •   Input: Monitoring user access patterns.
      •   Expected Result: Accurate identification of potential bot activity.
      •   Actual Result: Bot Identification Mechanism accurately identified potential bot
          activity.
      •   Status: Pass
10. Disguise Data Generation:
      •   Test Case ID: DDG_TC_001
      •   Input: Generating disguised data.
      •   Expected Result: Successful simulation of non-compliant data.
      •   Actual Result: Disguise Data Generator successfully simulated non-compliant
          data.
      •   Status: Pass
11. Monitoring and Auditing:
      •   Test Case ID: MA_TC_001
      •   Input: Monitoring various system activities.
      •   Expected Result: Detailed logs of policy enforcement and anomaly detections.
       •   Actual Result: Monitoring and Auditing module maintained detailed logs as
           expected.
       •   Status: Pass
12. Alerts and Notifications:
       •   Test Case ID: AN_TC_001
       •   Input: Policy violation or security threat occurrence.
       •   Expected Result: Immediate and informative alerts to administrators.
       •   Actual Result: Alerts and Notifications module triggered alerts promptly.
       •   Status: Pass
8.3. TEST REPORT
Introduction
The Test Report provides a comprehensive overview of the testing activities conducted for the
Advanced Cloud Data Security System. This report aims to summarize the results of the testing
phase, including the status of each test case and an overall assessment of the system's
functionality.
Test Objective
The primary objective of the testing phase was to evaluate the performance, reliability, and
functionality of the Advanced Cloud Data Security System. Specific goals included validating
user authentication, assessing data access controls, and ensuring the successful integration of
security features.
   •   To verify the functionality of user authentication and authorization mechanisms.
   •   To validate the implementation of access control policies based on the Cloaking Wall
       Model.
   •   To assess the system's ability to manage data securely and enforce data access policies
       effectively.
   •   To evaluate the system's resilience against security threats and vulnerabilities.
   •   To ensure that the system performs reliably under different scenarios and workloads.
Test Scope
The testing scope covered various modules within the system, including user authentication,
dashboard access, Cloaking Wall Model integration, access policy configuration, data
management, monitoring, and security features such as bot identification and disguise data
generation.
   •   Testing of user authentication, including login, registration, and multi-factor
       authentication.
   •   Testing of access control mechanisms, such as role-based access control and policy
       enforcement.
   •   Testing of data management functionalities, including data upload, storage, retrieval,
       and deletion.
   •   Testing of security features, such as encryption, data masking, and intrusion detection.
   •   Performance testing to assess system responsiveness, scalability, and resource
       utilization.
Test Environment
The testing environment was set up with the following components:
       Hardware:
           •   Processor: Intel Core i5-9400F CPU @ 2.90GHz
           •   RAM: 8GB DDR4
           •   Storage: 256GB SSD
           •   Network Interface Card: Gigabit Ethernet
       Software:
           •   Operating System: Windows 10 Home
           •   Web Browser: Google Chrome, Mozilla Firefox
           •   Database Management System: MySQL 8.0
           •   Web Server: WampServer 3.2.0
Test Result
The following summarizes the results of the test cases conducted during the testing phase:
   •   User Authentication: Successful authentication and access granted.
   •   Access Control: Access policies enforced based on Cloaking Wall Model.
   •   Data Management: Secure and efficient data management operations.
   •   Security Testing: Resilience against security threats and vulnerabilities.
   •   Performance Testing: Reliable performance under different scenarios.
Bug Report
A bug report is a document that details issues, defects, or unexpected behavior encountered in
software during testing or usage. It typically includes information about the problem, steps to
reproduce it, and any relevant system configurations. Bug reports are essential for developers
to identify and fix issues in the software.
Test Conclusion
The testing phase concludes with an overall positive assessment of the Advanced Cloud Data
Security System. The majority of test cases have been successfully executed, meeting expected
results. Identified issues were minimal and addressed promptly. The system is deemed ready
for deployment with necessary enhancements.
8.4. SOFTWARE DESCRIPTION
8.4.1. PYTHON 3.8
Python is a general-purpose, interpreted, interactive, object-oriented, high-level
programming language. It was created by Guido van Rossum and first released in 1991.
Like Perl, Python source code is freely available, under the GPL-compatible Python
Software Foundation License.
Pandas
pandas is a fast, powerful, flexible, and easy-to-use open-source data analysis and
manipulation tool built on top of the Python programming language. It provides fast,
flexible, and expressive data structures designed to make working with "relational" or
"labeled" data easy and intuitive, and it aims to be the fundamental high-level building
block for practical, real-world data analysis in Python.
Pandas is mainly used for analysis and manipulation of tabular data held in DataFrames.
It can import data from various file formats such as comma-separated values, JSON,
Parquet, SQL database tables or queries, and Microsoft Excel, and it supports operations
such as merging, reshaping, selecting, data cleaning, and data wrangling. The development
of pandas brought into Python many features for working with DataFrames that were
established in the R programming language. The pandas library is built upon NumPy,
which is oriented toward efficient work with arrays rather than DataFrames.
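A minimal illustration of the tabular operations described above (the column names and values are invented for the example):

```python
import pandas as pd

# Small illustrative DataFrame of access-log records.
df = pd.DataFrame({
    "user": ["u1", "u2", "u1", "u3"],
    "bytes": [120, 340, 80, 500],
})

# Total bytes transferred per user, via a group-by aggregation.
totals = df.groupby("user")["bytes"].sum()
print(totals["u1"])  # → 200
```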
NumPy
NumPy, which stands for Numerical Python, is a library consisting of multidimensional array
objects and a collection of routines for processing those arrays. Using NumPy, mathematical
and logical operations on arrays can be performed.
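A short example of NumPy's array operations:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)  # 2x3 array: [[0, 1, 2], [3, 4, 5]]
b = a * 2                       # elementwise arithmetic on the whole array
print(int(b.sum()))             # → 30
```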
Matplotlib
Matplotlib is a plotting library for the Python programming language and its numerical
mathematics extension NumPy. It provides an object-oriented API for embedding plots into
applications using general-purpose GUI toolkits like Tkinter, wxPython, Qt, or GTK.
Scikit Learn
scikit-learn is a Python module for machine learning built on top of SciPy and is distributed
under the 3-Clause BSD license.
Scikit-learn (formerly scikits.learn and also known as sklearn) is a free software machine
learning library for the Python programming language. It features various classification,
regression and clustering algorithms including support-vector machines, random forests,
gradient boosting, k-means and DBSCAN, and is designed to interoperate with the Python
numerical and scientific libraries NumPy and SciPy.
8.4.2 MYSQL
MySQL is a relational database management system based on Structured Query Language
(SQL), the standard language for accessing and managing records in a database. MySQL is
open-source software released under the GNU license and is supported by Oracle
Corporation. It provides the usual SQL operations for managing a database and
manipulating data: inserting, updating, deleting, and selecting records, and creating and
dropping tables.
MySQL is currently the most popular open-source database management system for
relational data. It is fast, scalable, and easy to use in comparison with Microsoft SQL Server
and Oracle Database, and it is commonly used in conjunction with PHP scripts to create
powerful, dynamic server-side web applications. Originally developed by MySQL AB, a
Swedish company, it is written in the C and C++ programming languages. The official
pronunciation of MySQL is "My Ess Que Ell", not "My Sequel", though either is widely
understood. Many small and large companies use MySQL. It runs on many operating
systems, including Windows, Linux, and macOS, with client libraries for C, C++, Java, and
other languages.
8.4.3 WAMPSERVER
WampServer is a Windows web development environment. It allows you to create web
applications with Apache2, PHP, and a MySQL database. phpMyAdmin, bundled
alongside, lets you manage your databases easily.
WampServer is a reliable web development package with an intuitive interface and
numerous features, which makes it a preferred choice of developers around the world. The
software is free to use and requires no payment or subscription.
8.4.4. BOOTSTRAP 4
Bootstrap is a free and open-source tool collection for creating responsive websites and web
applications. It is the most popular HTML, CSS, and JavaScript framework for developing
responsive, mobile-first websites.
It solves many long-standing problems, one of which is cross-browser compatibility: sites
built with it render consistently across browsers (IE, Firefox, and Chrome) and across
screen sizes (desktops, tablets, phablets, and phones). Bootstrap was created by Mark Otto
and Jacob Thornton of Twitter and was later released as an open-source project.
Easy to use: anybody with basic knowledge of HTML and CSS can start using Bootstrap.
Responsive features: Bootstrap's responsive CSS adjusts to phones, tablets, and desktops.
Mobile-first approach: in Bootstrap, mobile-first styles are part of the core framework.
Browser compatibility: Bootstrap 4 is compatible with all modern browsers (Chrome,
Firefox, Internet Explorer 10+, Edge, Safari, and Opera).
8.4.5. FLASK
Flask is a web framework: it provides the tools, libraries, and technologies needed to build
a web application, whether that is a few web pages, a blog, a wiki, a web-based calendar
application, or a commercial website.
Flask is often referred to as a microframework. It aims to keep the core of an application
simple yet extensible. Flask has no built-in abstraction layer for database handling and no
built-in form validation; instead, it supports extensions that add such functionality to the
application. Although Flask is young compared to most Python frameworks, it has already
gained wide popularity among Python web developers. Microframeworks normally have
few or no dependencies on external libraries. This has pros and cons: the framework stays
light and there are few dependencies to update and watch for security bugs, but you may
have to do more work yourself or grow the dependency list by adding plugins.
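A minimal Flask application showing the pattern the web app's routes follow (the route and response text are illustrative):

```python
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Each view function returns the response body for its route.
    return "Cloud Consumer Web App"
```

Running `flask run` (or `app.run()`) serves the application locally; further routes are added with more `@app.route` decorators.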
                                       CHAPTER 9
                                     CONCLUSION
In conclusion, the project introduces a robust solution to enhance data security in cloud
computing. The Cloaking Wall Model, with features like Long-Term Cloaking and
Geolocation-based Cloaking, ensures persistent confidentiality and global consistency. The
Camouflage Data Disguise technique, integrating Chaffing and Winnowing with ChaCha20
encryption, adds an extra layer of defense. The Cloud Consumer Web App's modular design
caters to both administrators and users, offering secure functionalities like user authentication,
data management, and monitoring. The project's testing phase, outlined in the test report,
demonstrates a rigorous approach to quality assurance. The innovative Bot Identification
Mechanism, coupled with the Disguise Data Generator module, adds an intelligent layer to the
security framework. By accurately identifying potential bot activity and simulating non-
compliant data instances, the system actively responds to emerging threats. The Monitoring
and Auditing modules, along with the immediate Alerts and Notifications system, empower
administrators to maintain real-time oversight, respond promptly to policy violations, and
uphold the integrity of the system. Thus, the project provides an adaptive solution to evolving
cloud data security challenges, aligning with the demands for secure and privacy-preserving
cloud computing practices.
                                     CHAPTER 10
                          FUTURE ENHANCEMENT
The future evolution of the system holds exciting possibilities, with key areas of focus.
Integrating machine learning algorithms stands out as a potential enhancement, enabling
dynamic analysis of access patterns to adeptly respond to evolving security threats. Behavioural
analytics is another avenue, offering a nuanced understanding of user behaviour to distinguish
normal activities from potential risks. Additionally, exploring blockchain integration is on the
horizon, aiming to enhance data integrity and transparency by leveraging the decentralized and
tamper-resistant nature of blockchain technology. These enhancements collectively propel the
system towards a more adaptive, context-aware, and secure future.
                                       APPENDIX
A. SOURCE CODE
Packages
import os
import base64
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.fernet import Fernet
from Crypto import Random
from flask import Flask, render_template, Response, redirect, request, session, abort, url_for, flash
from werkzeug.utils import secure_filename
import mysql.connector
import hashlib
import shutil
from datetime import date
import datetime
import math
from random import randint
from flask_mail import Mail, Message
from flask import send_file
Database Connection
mydb = mysql.connector.connect(
    host="localhost",
    user="root",
    password="",
    charset="utf8",
    database="cloud_cloaking"
)
Login
def login():
    msg = ""
    if request.method == 'POST':
        uname = request.form['uname']
        pwd = request.form['pass']
        cursor = mydb.cursor()
        cursor.execute('SELECT * FROM data_owner WHERE owner_id = %s AND password = %s AND status = 1', (uname, pwd))
        account = cursor.fetchone()
        if account:
            session['username'] = uname
            return redirect(url_for('upload'))
        else:
            msg = 'Incorrect username/password!'
Data Owner Registration
def register():
    msg = ""
    mycursor = mydb.cursor()
    mycursor.execute("SELECT max(id)+1 FROM data_owner")
    maxid = mycursor.fetchone()[0]
    now = datetime.datetime.now()
    rdate = now.strftime("%d-%m-%Y")
    if maxid is None:
        maxid = 1
    if request.method == 'POST':
        name = request.form['name']
        mobile = request.form['mobile']
        email = request.form['email']
        city = request.form['city']
        uname = request.form['uname']
        pass1 = request.form['pass']
        cursor = mydb.cursor()
        cursor.execute('SELECT count(*) FROM data_owner WHERE owner_id = %s', (uname,))
        cnt = cursor.fetchone()[0]
        if cnt == 0:
            sql = "INSERT INTO data_owner(id, name, mobile, email, city, owner_id, password, reg_date) VALUES (%s, %s, %s, %s, %s, %s, %s, %s)"
            val = (maxid, name, mobile, email, city, uname, pass1, rdate)
            cursor.execute(sql, val)
            mydb.commit()
            print(cursor.rowcount, "Registered Success")
            msg = "success"
        else:
            msg = 'fail'
Upload Files
def upload():
    msg = ""
    act = ""
    if 'username' in session:
        uname = session['username']
        mycursor = mydb.cursor()
        mycursor.execute('SELECT * FROM data_owner where owner_id=%s', (uname,))
        rr = mycursor.fetchone()
        name = rr[1]
        now = datetime.datetime.now()
        rdate = now.strftime("%d-%m-%Y")
        rtime = now.strftime("%H:%M")
        if request.method == 'POST':
            description = request.form['description']
            mycursor.execute("SELECT max(id)+1 FROM data_files")
            maxid = mycursor.fetchone()[0]
            if maxid is None:
                maxid = 1
            if 'file' not in request.files:
                flash('No file part')
                return redirect(request.url)
            file = request.files['file']
            file_type = file.content_type
            if file.filename == '':
                flash('No selected file')
                return redirect(request.url)
            if file:
                fname = "F" + str(maxid) + file.filename
                filename = secure_filename(fname)
                file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename))
                bsize = os.path.getsize("static/upload/" + filename)
                fsize = bsize / 1024
                file_size = round(fsize, 2)
                ff = filename.split('.')
                # imgext and img are module-level lists (defined elsewhere)
                # mapping file extensions to display icons.
                i = 0
                file_ext = ''
                for fimg in imgext:
                    if fimg == ff[1]:
                        file_ext = img[i]
                        break
                    else:
                        file_ext = img[0]
                    i += 1
                sql = "INSERT INTO data_files(id, owner_id, description, file_name, file_type, file_size, reg_date, reg_time, file_extension) VALUES (%s, %s, %s, %s, %s, %s, %s, %s, %s)"
                val = (maxid, uname, description, filename, file_type, file_size, rdate, rtime, file_ext)
                mycursor.execute(sql, val)
                mydb.commit()
                msg = "success"
Share Files
file_id = request.args.get("file_id")
uname = ""
msg = ""
act = request.args.get('act')
if 'username' in session:
    uname = session['username']
    mycursor = mydb.cursor()
    mycursor.execute("SELECT * FROM data_owner where owner_id=%s", (uname,))
    value = mycursor.fetchone()
    name = value[1]
    now = datetime.datetime.now()
    rdate = now.strftime("%d-%m-%Y")
    mycursor.execute("SELECT * FROM data_user where owner_id=%s", (uname,))
    udata = mycursor.fetchall()
    mycursor.execute("SELECT count(*) FROM data_user where owner_id=%s", (uname,))
    ucnt = mycursor.fetchone()[0]
    mycursor.execute("SELECT * FROM data_files where id=%s", (file_id,))
    fdata = mycursor.fetchone()
    fname = fdata[3]
    if request.method == 'POST':
        selected_users = request.form.getlist('uu[]')
        for u1 in selected_users:
            mycursor.execute("SELECT max(id)+1 FROM data_share")
            maxid = mycursor.fetchone()[0]
            if maxid is None:
                maxid = 1
            sql = "INSERT INTO data_share(id, owner_id, file_id, username, share_type, share_date) VALUES (%s, %s, %s, %s, %s, %s)"
            val = (maxid, uname, file_id, u1, '1', rdate)
            mycursor.execute(sql, val)
            mydb.commit()
        act = "success"
ChaCha20 Encryption
import struct

def yield_chacha20_xor_stream(key, iv, position=0):
    """Generate the xor stream with the ChaCha20 cipher."""
    if not isinstance(position, int):
        raise TypeError
    if position & ~0xffffffff:
        raise ValueError('Position is not uint32.')
    if not isinstance(key, bytes):
        raise TypeError
    if not isinstance(iv, bytes):
        raise TypeError
    if len(key) != 32:
        raise ValueError
    if len(iv) != 8:
        raise ValueError

    def rotate(v, c):
        return ((v << c) & 0xffffffff) | v >> (32 - c)

    def quarter_round(x, a, b, c, d):
        x[a] = (x[a] + x[b]) & 0xffffffff
        x[d] = rotate(x[d] ^ x[a], 16)
        x[c] = (x[c] + x[d]) & 0xffffffff
        x[b] = rotate(x[b] ^ x[c], 12)
        x[a] = (x[a] + x[b]) & 0xffffffff
        x[d] = rotate(x[d] ^ x[a], 8)
        x[c] = (x[c] + x[d]) & 0xffffffff
        x[b] = rotate(x[b] ^ x[c], 7)

    ctx = [0] * 16
    ctx[:4] = (1634760805, 857760878, 2036477234, 1797285236)
    ctx[4:12] = struct.unpack('<8L', key)
    ctx[12] = ctx[13] = position
    ctx[14:16] = struct.unpack('<LL', iv)
    while 1:
        x = list(ctx)
        for i in range(10):
            quarter_round(x, 0, 4, 8, 12)
            quarter_round(x, 1, 5, 9, 13)
            quarter_round(x, 2, 6, 10, 14)
            quarter_round(x, 3, 7, 11, 15)
            quarter_round(x, 0, 5, 10, 15)
            quarter_round(x, 1, 6, 11, 12)
            quarter_round(x, 2, 7, 8, 13)
            quarter_round(x, 3, 4, 9, 14)
        for c in struct.pack('<16L', *(
                (x[i] + ctx[i]) & 0xffffffff for i in range(16))):
            yield c
        ctx[12] = (ctx[12] + 1) & 0xffffffff
        if ctx[12] == 0:
            ctx[13] = (ctx[13] + 1) & 0xffffffff

def chacha20_encrypt(data, key, iv=None, position=0):
    """Encrypt (or decrypt) with the ChaCha20 cipher."""
    if not isinstance(data, bytes):
        raise TypeError
    if iv is None:
        iv = b'\0' * 8
    if isinstance(key, bytes):
        if not key:
            raise ValueError('Key is empty.')
        if len(key) < 32:
            # TODO(pts): Do key derivation with PBKDF2 or something similar.
            key = (key * (32 // len(key) + 1))[:32]
        if len(key) > 32:
            raise ValueError('Key too long.')
    return bytes(a ^ b for a, b in
                 zip(data, yield_chacha20_xor_stream(key, iv, position)))

assert chacha20_encrypt(
    b'Hello World', b'chacha20!') == b'\xeb\xe78\xad\xd5\xab\x18R\xe2O~'
assert chacha20_encrypt(
    b'\xeb\xe78\xad\xd5\xab\x18R\xe2O~', b'chacha20!') == b'Hello World'

def run_tests():
    import binascii
    uh = lambda x: binascii.unhexlify(bytes(x, 'ascii'))
    for i, (ciphertext, key, iv) in enumerate((
            (uh('76b8e0ada0f13d90405d6ae55386bd28bdd219b8a08ded1aa836efcc8b770dc7'
                'da41597c5157488d7724e03fb8d84a376a43b8f41518a11cc387b669'),
             uh('0000000000000000000000000000000000000000000000000000000000000000'),
             uh('0000000000000000')),)):
        assert chacha20_encrypt(b'\0' * len(ciphertext), key, iv) == ciphertext
        print('Test %d OK.' % i)
Set Geo Location
def share_geolocation():
    mycursor = mydb.cursor()
    mycursor.execute('SELECT * FROM data_user where username=%s', (uname,))
    rr = mycursor.fetchone()
    name = rr[1]
    owner = rr[2]
    ff = open("static/geo.txt", "r")
    loc = ff.read()
    ff.close()
    mycursor.execute("SELECT count(*) FROM data_files f, data_share s where s.fid=f.id AND s.username=%s", (uname,))
    c1 = mycursor.fetchone()[0]
    if c1 > 0:
        mycursor.execute("SELECT * FROM data_files f, data_share s where s.fid=f.id AND s.username=%s", (uname,))
        dat = mycursor.fetchall()
        for d1 in dat:
            status = ''
            if d1[13] == 1:
                status = '1'
            if d1[13] == 2:
                # latitude and longitude are parsed from loc in elided code above.
                lat1 = latitude.split('.')
                lt1 = lat1[0]
                lt11 = lat1[1]
                lt2 = lt11[0:4]
                lon1 = longitude.split('.')
                lo1 = lon1[0]
                lo2 = lon1[1]
                mycursor.execute("SELECT * FROM share_location where share_id=%s", (d1[9],))
                d33 = mycursor.fetchall()
                for d3 in d33:
                    mycursor.execute("SELECT * FROM geo_location where id=%s", (d3[4],))
                    d4 = mycursor.fetchone()
                    g1 = d4[2]
                    geo_location = g1.split('new google.maps.LatLng(')
                    g21 = ''.join(geo_location)
                    g22 = g21.split('), ')
                    g23 = '-'.join(g22)
                    g24 = g23.split('-')
                    gn = len(g24) - 1
                    i = 0
                    while i < gn:
                        # the i-th coordinate token; gloc1 is a list
                        # initialized in elided code above.
                        f1 = g24[i].split('.')
                        geo1 = f1[0]
                        f2 = f1[1]
                        f3 = f2[0:4]
                        gloc1.append(f3)
                        i += 1
Set Time and Date
def share_time:
    date_st = ''
    time_st = ''
    days_st = ''
    # Between-date check: is today inside the share's start/end window?
    sdate = d1[15]
    edate = d1[16]
    sd1 = sdate.split('-')
    ed1 = edate.split('-')
    import datetime
    # Current date, split into day/month/year like the stored dd-mm-yyyy dates.
    cd1 = datetime.date.today().strftime('%d-%m-%Y').split('-')
    sdd = datetime.datetime(int(sd1[2]), int(sd1[1]), int(sd1[0]))
    cdd = datetime.datetime(int(cd1[2]), int(cd1[1]), int(cd1[0]))
    edd = datetime.datetime(int(ed1[2]), int(ed1[1]), int(ed1[0]))
    if sdd < cdd < edd:
        date_st = '1'
    else:
        date_st = '0'
    # Day-of-week check: is today among the allowed days (1=Sunday .. 7=Saturday)?
    dys = d1[19]
    dy = dys.split(',')
    x = 0
    from datetime import datetime
    dty = datetime.now()
    ddy = dty.strftime('%A')
    ddr = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday']
    i = 0
    for ddr1 in ddr:
        i += 1
        if ddr1 == ddy:
            break
    cdy = str(i)
    for dy1 in dy:
        if cdy == dy1:
            x += 1
    if x > 0:
        days_st = '1'
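The two checks above (date window plus allowed weekday) can be folded into one helper. A sketch under the assumptions the code suggests — dates stored as dd-mm-yyyy strings, days as comma-separated numbers with 1 = Sunday; access_window_ok is a hypothetical name:

```python
from datetime import datetime

def access_window_ok(start, end, allowed_days, now=None):
    """Return True when `now` falls between the start/end dates
    (dd-mm-yyyy strings) and its weekday number, in the share table's
    Sunday=1 .. Saturday=7 numbering, is listed in `allowed_days`."""
    now = now or datetime.now()
    sdd = datetime.strptime(start, '%d-%m-%Y')
    edd = datetime.strptime(end, '%d-%m-%Y')
    # isoweekday() is Mon=1 .. Sun=7; map it to Sun=1 .. Sat=7.
    day_no = now.isoweekday() % 7 + 1
    return sdd <= now <= edd and str(day_no) in allowed_days.split(',')
```

Parsing with strptime avoids the manual split/int conversions and rejects malformed dates with a clear ValueError.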
def file_download():
    # Look up the stored file name for the requested id and return the
    # file from the upload directory as an attachment.
    fid = request.args.get('fid')
    mycursor = mydb.cursor()
    mycursor.execute("SELECT * FROM data_files where id=%s", (fid,))
    value = mycursor.fetchone()
    path = "static/upload/" + value[3]
    return send_file(path, as_attachment=True)
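file_download joins a database-supplied file name directly onto static/upload/. If those names can ever be influenced by a user, a containment check prevents path traversal; a minimal sketch (safe_download_path is a hypothetical helper, not part of the project code):

```python
import os

UPLOAD_DIR = "static/upload"

def safe_download_path(filename):
    """Join `filename` onto the upload directory and refuse any name
    (e.g. '../app.py') that would resolve outside it."""
    candidate = os.path.realpath(os.path.join(UPLOAD_DIR, filename))
    base = os.path.realpath(UPLOAD_DIR)
    if os.path.commonpath([candidate, base]) != base:
        raise ValueError("illegal file name: %r" % filename)
    return candidate
```

In Flask the same protection is available built in via send_from_directory, which performs an equivalent containment check before serving the file.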
B. SCREENSHOTS
(Application screenshots are included at this point in the original report.)
                               BIBLIOGRAPHY
1. J. Gao, H. Yu, X. Zhu and X. Li, "Blockchain-based digital rights management scheme
   via multiauthority ciphertext-policy attribute-based encryption and proxy re-
   encryption", IEEE Syst. J., vol. 15, no. 4, pp. 5233-5244, Dec. 2021.
2. J. Sun, D. Chen, N. Zhang, G. Xu, M. Tang, X. Nie, et al., "A privacy-aware and
   traceable fine-grained data delivery system in cloud-assisted healthcare IIoT", IEEE
   Internet Things J., vol. 8, no. 12, pp. 10034-10046, Jun. 2021.
3. P. Patil and M. Sangeetha, "Blockchain-based decentralized KYC verification
   framework for banks", Proc. Comput. Sci., vol. 215, pp. 529-536, Jan. 2022.
4. P. Sanchol, S. Fugkeaw and H. Sato, "A mobile cloud-based access control with
   efficiently outsourced decryption", Proc. 10th IEEE Int. Conf. Mobile Cloud Comput.
   Services Eng. (MobileCloud), pp. 1-8, Aug. 2022.
5. S. Fugkeaw, "A lightweight policy update scheme for outsourced personal health
   records sharing", IEEE Access, vol. 9, pp. 54862-54871, 2021.
6. S. Qi, W. Wei, J. Wang, S. Sun, L. Rutkowski, T. Huang, et al., "Secure data
   deduplication with dynamic access control for mobile cloud storage", IEEE Trans.
   Mobile Comput., pp. 1-18, 2023.
7. S. Wang, H. Wang, J. Li, H. Wang, J. Chaudhry, M. Alazab, et al., "A fast CP-ABE
   system for cyber-physical security and privacy in mobile healthcare network", IEEE
   Trans. Ind. Appl., vol. 56, no. 4, pp. 4467-4477, Jul. 2020.
8. X. Li, T. Liu, C. Chen, Q. Cheng, X. Zhang and N. Kumar, "A lightweight and
   verifiable access control scheme with constant size ciphertext in edge-computing-
   assisted IoT", IEEE Internet Things J., vol. 9, no. 19, pp. 19227-19237, Oct. 2022.
9. Y. Chen, J. Li, C. Liu, J. Han, Y. Zhang and P. Yi, "Efficient attribute based server-
   aided verification signature", IEEE Trans. Services Comput., vol. 15, no. 6, pp. 3224-
   3232, Nov. 2022.
10. Y. Lin, J. Li, X. Jia and K. Ren, "Multiple-replica integrity auditing schemes for cloud
   data storage", Concurrency Comput. Pract. Exper., vol. 33, no. 7, pp. 1, Apr. 2021.
BOOK REFERENCES
 1. "Python Crash Course" by Eric Matthes
 2. "Fluent Python" by Luciano Ramalho
 3. "Automate the Boring Stuff with Python" by Al Sweigart
 4. "Learning MySQL" by Seyed M.M. Tahaghoghi and Hugh E. Williams
 5. "High-Performance MySQL" by Baron Schwartz, Peter Zaitsev, and Vadim Tkachenko
 6. "MySQL Cookbook" by Paul DuBois
 7. "Bootstrap 4 in Action" by Jamie Munro
 8. "Bootstrap Reference Guide" by Aravind Shenoy
 9. "Mastering Bootstrap 4" by Benjamin Jakobus
WEB REFERENCES
 1. Mozilla Developer Network (MDN): https://developer.mozilla.org/
 2. W3Schools: https://www.w3schools.com/
 3. Stack Overflow: https://stackoverflow.com/
 4. CSS-Tricks: https://css-tricks.com/
 5. A List Apart: https://alistapart.com/