A CONTROLLABLE DATA SELF-DESTRUCTION
SYSTEM FOR UNTRUSTED CLOUD STORAGE
NETWORKS
A PROJECT REPORT
Submitted by
GAAYATHRI.M 411414104005
MAHIN FATHIMA.A 411414104013
NIVETHA.S 411414104015
in partial fulfillment for the award of the degree
of
BACHELOR OF ENGINEERING
IN
COMPUTER SCIENCE AND ENGINEERING
NEW PRINCE SHRI BHAVANI COLLEGE OF ENGINEERING
AND TECHNOLOGY
GOWRIVAKKAM, CHENNAI - 600073
ANNA UNIVERSITY::CHENNAI 600 025
APRIL 2018
ANNA UNIVERSITY: CHENNAI 600 025
BONAFIDE CERTIFICATE
Certified that this project report “A CONTROLLABLE DATA SELF-
DESTRUCTION SYSTEM FOR UNTRUSTED CLOUD STORAGE
NETWORKS IN CLOUD COMPUTING” is the bonafide work of
“M.GAAYATHRI (411414104005), A.MAHIN FATHIMA
(411414104013) and S.NIVETHA (411414104015)” who carried out the
project work under my supervision.
SIGNATURE SIGNATURE
Mrs.D.R.Anita Sofia Liz, M.E., Mrs.P.Kavitha, M.E.,
HEAD OF THE DEPARTMENT ASSISTANT PROFESSOR
Computer Science &Engineering Computer Science &Engineering
New Prince Shri Bhavani College of New Prince Shri Bhavani College of
Engineering and Technology Engineering and Technology
Gowrivakkam, Gowrivakkam,
Chennai-600 073 Chennai-600 073
Submitted for the University Project Viva Voce conducted on ___________
INTERNAL EXAMINER EXTERNAL EXAMINER
ACKNOWLEDGEMENT
Protected by the omnipotent powers of the Almighty, together with the
blessings of our parents and their unconditional love, support and
encouragement, we have been able to complete this project successfully. It is
our honour to dedicate this project to them.
We express our sincere gratitude to our founder Chairman
Mr.K.Loganathan, M.Com., M.Ed., Vice Chairman Mr.L.Naveen
Prasad, B.E., M.B.A., and our Academic Director Prof.A.Swaminathan,
M.E., for their appreciation and for providing the facilities which made the
experience a pleasant one.
We take the privilege of expressing our sincere thanks to our beloved
Principal Dr.T.Saravanan, M.E., Ph.D., for granting permission to
undertake the project.
We express our profound gratitude to Mrs.D.R.Anita Sofia Liz, M.E.,
Head of the Department of Computer Science and Engineering, for her
timely suggestions concerning this project.
It is a great pleasure to express our gratitude and thanks to our
project guide Mrs.P.Kavitha, M.E., Assistant Professor, for her continual
suggestions and words of improvement regarding this project, which played
a major role in keeping us on track.
Sincerely, we thank all the faculty members and technical assistants of
the Computer Science and Engineering Department for their constant
support, valuable suggestions, encouragement and guidance throughout the
period of study, without which reaching this mark of success would have
remained a distant dream.
ABSTRACT
Cloud storage is an application of cloud computing that liberates
organizations from establishing in-house data storage systems. However,
cloud storage gives rise to security concerns. In the case of group-shared
data, the data face both cloud-specific and conventional insider threats.
Secure data sharing among a group that counters insider threats from
legitimate yet malicious users is an important research issue. This project
proposes the Secure Data Sharing in Clouds (SeDaSC) methodology, which
provides: data confidentiality and integrity; access control; data sharing
(forwarding) without compute-intensive re-encryption; insider threat
security; forward and backward access control; one-time download;
share-time expiry; and secret key management. The SeDaSC methodology
encrypts a file with a single encryption key. Two different key shares are
generated for each user, with the user receiving only one share. The
possession of only a single share of the key allows the SeDaSC methodology
to counter insider threats. The other key share is stored by a trusted third
party called the cryptographic server. The SeDaSC methodology is
applicable to both conventional and mobile cloud computing environments.
This project implements a working prototype of the SeDaSC methodology
and evaluates its performance based on the time consumed during various
operations. The working of SeDaSC is formally verified using high-level
Petri nets, the Satisfiability Modulo Theories Library, and the Z3 solver.
The results are encouraging and show that SeDaSC has the potential to be
effectively used for secure data sharing in the cloud.
TABLE OF CONTENTS
CHAPTER TITLE PAGE
NO. NO
BONAFIDE CERTIFICATE ii
ACKNOWLEDGEMENT iii
ABSTRACT iv
LIST OF FIGURES ix
1 INTRODUCTION 1
1.1 Characteristics and Services Model 3
1.2 Services Models 4
1.3 Benefits of cloud computing 5
1.4 Advantages 6
2 SYSTEM ANALYSIS 7
2.1 Existing System 7
2.2 Disadvantages of Existing System 7
2.3 Proposed System 7
2.4 Advantages of Proposed System 8
3 LITERATURE SURVEY 9
3.1 Oruta: privacy-preserving public auditing for shared data in the cloud 9
3.2 Cloud migration research: a systematic review 9
3.3 Toward efficient and privacy-preserving computing 10
3.4 Scalable, server-passive, user anonymous timed release
cryptography 10
3.5 Ciphertext-policy attribute-based encryption 11
3.6 Provably secure ciphertext policy 11
3.7 Achieving secure, scalable, and fine-grained data access
control in cloud computing 12
3.8 Sok: secure data deletion 12
4 SYSTEM ARCHITECTURE 13
4.1 System Testing 13
4.2 Type of Testing 14
4.2.1 Unit Testing 14
4.2.2 Integration Testing 14
4.2.3 Functional Testing 14
4.3 System Testing 15
4.3.1 White Box Testing 15
4.3.2 Black box testing 15
4.3.3 Big bang 16
4.3.4 Top-Down Testing 16
4.3.5 Bottom-Up Testing 16
4.3.6 Unit Testing 17
4.3.7 Test Strategy and approach 17
4.3.8 Test Objectives 17
4.3.9 Features to be tested 17
4.4 Integration Testing 17
4.4.1 Integration test plan 18
4.5 Acceptance Testing 18
4.6 Data Flow Diagram 20
5 MODULES DESCRIPTION 21
5.1 Modules 21
5.2 Modules Description 21
5.2.1 Authentication and Authorization 21
5.2.2 File Encryption and Data Storing to Cloud 21
5.2.3 File Sharing 22
5.2.4 File Decryption and Download from Cloud 22
5.2.5 File Autolysis of Data and Access Control 22
6 SYSTEM SPECIFICATION 23
6.1 System Requirement 23
6.1.1 Hardware Requirements 23
6.1.2 Software Requirements 23
6.2 Software Environment 23
6.2.1 Java Technology 23
6.2.2 The Java Programming Language 23
6.2.3 The Java Platform 25
6.3 Input Design 42
6.4 Output Design 43
6.5 System Study 44
7 UML DIAGRAMS 47
7.1 Goal 47
7.2 Use Case Diagram 48
7.3 Class Diagram 49
7.4 Sequence Diagram 49
7.5 Activity Diagram 51
7.6 Collaboration diagram 52
7.7 State diagram 52
7.8 Deployment diagram 53
7.9 Component diagram 54
7.10 ER diagram 55
8 CONCLUSION 56
APPENDIX 1 SAMPLE CODING 57
APPENDIX 2 MODULE SCREEN SHOT 68
REFERENCES 77
LIST OF FIGURES
FIGURE TITLE PAGE
NO NO
1.1 STRUCTURE OF CLOUD COMPUTING 2
1.2 CHARACTERISTICS OF CLOUD 4
COMPUTING
1.3 STRUCTURE OF SERVICE MODELS 5
4.1 SYSTEM ARCHITECTURE 13
4.2 DATA FLOW DIAGRAM 20
6.1 JAVA PROGRAMMING LANGUAGE 24
6.2 JAVA COMPILER 25
6.3 JAVA PLATFORM 26
6.4 JAVA IDE 27
6.5 COMPILING AND INTERPRETING JAVA 30
SOURCE CODE
7.1 USE CASE DIAGRAM 48
7.2 CLASS DIAGRAM 49
7.3 SEQUENCE DIAGRAM 50
7.4 ACTIVITY DIAGRAM 51
7.5 COLLABORATION DIAGRAM 52
7.6 STATE DIAGRAM 53
7.7 DEPLOYMENT DIAGRAM 53
7.8 COMPONENT DIAGRAM 54
7.9 ER DIAGRAM 55
CHAPTER 1
INTRODUCTION
Cloud computing is the use of computing resources (hardware and
software) that are delivered as a service over a network. Cloud computing
entrusts remote services with a user's data, software and computation. Cloud
computing consists of hardware and software resources made available on
the Internet as managed third-party services. These services typically
provide access to advanced software applications and high-end networks of
server computers.
Cloud computing is considered the next step in the evolution of on-demand
information technology, combining a set of existing and new techniques
from research areas such as service-oriented architecture (SOA) and
virtualization. The shared data in cloud servers usually contains users'
sensitive information (e.g., personal profile, financial data, health records,
etc.) and needs to be well protected. As the ownership of the data is
separated from the administrator, the cloud servers may migrate user’s data
to other cloud servers in outsourcing or share them in cloud searching.
Therefore, it becomes a big challenge to protect the privacy of those shared
data in cloud, especially in cross-cloud and big data environment. In order to
meet this challenge, it is necessary to design a comprehensive solution to
support user-defined authorization period and to provide fine-grained access
control during this period. The shared data should be self-destroyed after the
user-defined expiration time. One of the methods to alleviate these problems
is to store the data in a common encrypted form.
Attribute-based encryption (ABE) has significant advantages over
traditional public-key encryption because it achieves flexible one-to-many
encryption rather than one-to-one encryption. An ABE scheme provides a
powerful method to achieve both data security and fine-grained access
control.
In the key-policy ABE (KP-ABE) scheme elaborated in this project,
the ciphertext is labeled with a set of descriptive attributes. Only when the
set of descriptive attributes satisfies the access structure in the key can the
user recover the plaintext. In general, the owner has the right to specify that
certain sensitive information is only valid for a limited period of time, or
should not be released before a particular time.
Fig1.1 Structure of cloud computing
Timed-release encryption (TRE) provides an encryption service where
an encryption key is associated with a predefined release time, and a receiver
can only construct the corresponding decryption key at that time instant. It
can be used in many applications, e.g., Internet programming contests,
electronic sealed-bid auctions, etc.
However, applying ABE to the shared data introduces several
problems with regard to time-specific constraints and self-destruction, while
applying TRE introduces problems with regard to fine-grained access
control. Thus, this project attempts to solve these problems by using
KP-ABE and adding a time-interval constraint to each attribute in the set
of decryption attributes.
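The time-interval idea can be illustrated with a plain check: a decryption attribute is usable only while the current time instant lies inside the interval attached to it. The sketch below is illustrative only; the class and field names are our own assumptions, and the real KP-TSABE construction enforces this condition cryptographically rather than with application logic.

```java
// Illustrative sketch only: in KP-TSABE the interval check is enforced
// cryptographically, not by an if-statement as shown here.
public class TimedAttribute {
    final String name;
    final long releaseTime;    // earliest usable time instant (epoch millis)
    final long expirationTime; // latest usable time instant

    TimedAttribute(String name, long releaseTime, long expirationTime) {
        this.name = name;
        this.releaseTime = releaseTime;
        this.expirationTime = expirationTime;
    }

    // Decryption with this attribute succeeds only inside the interval;
    // outside it the shared data is effectively self-destroyed.
    boolean usableAt(long instant) {
        return instant >= releaseTime && instant <= expirationTime;
    }
}
```

For example, an attribute valid over the interval [1000, 2000] would permit decryption at instant 1500 but refuse it at 2500, after the owner-defined expiry.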
1.1 Characteristics and Services Models
The salient characteristics of cloud computing, based on the definitions
provided by the National Institute of Standards and Technology (NIST), are
outlined below:
On-demand self-service: A consumer can unilaterally provision
computing capabilities, such as server time and network storage, as
needed automatically, without requiring human interaction with each
service provider.
Broad network access: Capabilities are available over the network
and accessed through standard mechanisms that promote use by
heterogeneous thin or thick client platforms (e.g., mobile phones,
laptops, and PDAs).
Resource pooling: The provider’s computing resources are pooled to
serve multiple consumers using a multi-tenant model, with different
physical and virtual resources dynamically assigned and reassigned
according to consumer demand. There is a sense of location-
independence in that the customer generally has no control or
knowledge over the exact location of the provided resources but may
be able to specify location at a higher level of abstraction (e.g.,
country, state, or data center). Examples of resources include storage,
processing, memory, network bandwidth, and virtual machines.
Rapid elasticity: Capabilities can be rapidly and elastically
provisioned, in some cases automatically, to quickly scale out and
rapidly released to quickly scale in. To the consumer, the capabilities
available for provisioning often appear to be unlimited and can be
purchased in any quantity at any time.
Measured service: Cloud systems automatically control and optimize
resource use by leveraging a metering capability at some level of
abstraction appropriate to the type of service (e.g., storage, processing,
bandwidth, and active user accounts). Resource usage can be
monitored, controlled, and reported, providing transparency for both the
provider and the consumer of the utilized service.
Fig1.2 Characteristics of cloud computing
1.2 Services Models
Cloud computing comprises three service models, namely
Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and
Software-as-a-Service (SaaS). The three service layers are complemented
by an end-user layer that encapsulates the end-user perspective on cloud
services; the model is shown in the figure below. If a cloud user accesses
services on the infrastructure layer, for instance, she can run her own
applications on the resources of a cloud infrastructure and remains
responsible for the support, maintenance, and security of these applications
herself.
Fig 1.3 Structure of service models
1.3 Benefits of cloud computing
1. Achieve economies of scale. Increase volume output or productivity
with fewer people. Your cost per unit, project or product plummets.
2. Reduce spending on technology infrastructure. Maintain easy
access to your information with minimal upfront spending. Pay as you
go (weekly, quarterly or yearly) based on demand.
3. Globalize your workforce on the cheap. People worldwide can
access the cloud, provided they have an Internet connection.
4. Streamline processes. Get more work done in less time with fewer
people.
5. Reduce capital costs. There’s no need to spend big money on
hardware, software or licensing fees.
6. Improve accessibility. You have access anytime, anywhere, making
your life so much easier!
7. Monitor projects more effectively. Stay within budget and ahead of
completion cycle times.
8. Less personnel training is needed. It takes fewer people to do more
work on a cloud, with a minimal learning curve on hardware and
software issues.
9. Minimize licensing new software. Stretch and grow without the need
to buy expensive software licenses or programs.
10.Improve flexibility. You can change direction without serious
“people” or “financial” issues at stake.
1.4 Advantages
1. Price: Pay only for the resources used.
2. Security: Cloud instances are isolated in the network from other
instances for improved security.
3. Performance: Instances can be added instantly for improved
performance. Clients have access to the total resources of the Cloud’s
core hardware.
4. Scalability: Auto-deploy cloud instances when needed.
5. Uptime: Uses multiple servers for maximum redundancies. In case of
server failure, instances can be automatically created on another
server.
6. Control: Able to login from any location. Server snapshot and a
software library lets you deploy custom instances.
CHAPTER 2
SYSTEM ANALYSIS
2.1 EXISTING SYSTEM
The sharing of data among users is perhaps one of the most engaging
features that motivate cloud storage.
Regarding the availability of files, there is a series of cryptographic
schemes which go as far as allowing a third-party auditor to check the
availability of files on behalf of the data owner without leaking
anything about the data, and without compromising the data owner's
anonymity. A problem arises when a file is shared with multiple
users.
2.2 DISADVANTAGES OF EXISTING SYSTEM
• Privacy issues.
• A large amount of cloud storage space is needed.
• The original file path link is shared.
• Other files are easy to attack via that shareable link.
• Less secure overall.
2.3 PROPOSED SYSTEM
This project proposes key-policy attribute-based encryption with
time-specified attributes (KP-TSABE), a novel secure data
self-destruction (autolysis) scheme in cloud computing.
In the KP-TSABE scheme, every ciphertext is labeled with a time
interval, while each private key is associated with a time instant.
The ciphertext can be decrypted only if the time instant lies within the
allowed time interval and the attributes associated with the ciphertext
satisfy the key's access structure.
This project proposes the Secure Data Sharing in Clouds (SeDaSC)
methodology that provides:
1. Data confidentiality and integrity.
2. Access control.
3. Data sharing (forwarding) without using compute-intensive re-
encryption.
4. Insider threat security.
5. Forward and backward access control.
6. One time download.
7. Share Time Expire.
8. Secret Key Management.
2.4 ADVANTAGES OF PROPOSED SYSTEM
• Security issues are addressed.
• Privacy issues are minimized.
• Reduced space required to store data in the cloud.
• Malicious file sharing is prevented.
• Forward and backward access control is enforced.
CHAPTER 3
LITERATURE SURVEY
3.1 ORUTA: PRIVACY-PRESERVING PUBLIC AUDITING FOR
SHARED DATA IN THE CLOUD (2014)
AUTHORS: Boyang Wang, Student Member, IEEE, Baochun Li, Senior
Member, IEEE, and Hui Li, Member, IEEE.
This paper proposes a novel privacy-preserving mechanism that
supports public auditing on shared data stored in the cloud. In particular, it
exploits ring signatures to compute the verification metadata needed to audit
the correctness of shared data. With this mechanism, the identity of the
signer on each block in the shared data is kept private from public verifiers,
who are able to efficiently verify shared data integrity without retrieving the
entire file. In addition, the mechanism is able to perform multiple auditing
tasks simultaneously instead of verifying them one by one.
3.2 CLOUD MIGRATION RESEARCH: A SYSTEMATIC REVIEW
(2013)
AUTHORS: Iliana Iankoulova, Maya Daneva
This paper aims to identify, taxonomically classify, and systematically
compare existing research on cloud migration. The research synthesis results
in a knowledge base of current solutions for legacy-to-cloud migration. This
review also identifies research gaps and directions for future research. It
reveals that cloud migration research is still in the early stages of maturity,
but is advancing. It identifies the need for a migration framework to help
improve the maturity level and, consequently, trust in cloud migration. The
study also identifies needs for architectural adaptation and self-adaptive
cloud-enabled systems.
3.3 TOWARD EFFICIENT AND PRIVACY-PRESERVING
COMPUTING IN BIG DATA ERA (2014)
AUTHORS: Charith Perera, Rajiv Ranjan, Lizhe Wang, Samee U. Khan,
and Albert Y. Zomaya.
The paper aims to exploit the new challenges of big data in terms of
privacy, and devotes attention toward efficient and privacy-preserving
computing in the big data era. Specifically, it first formalizes the general
architecture of big data analytics, identifies the corresponding privacy
requirements, and introduces an efficient and privacy-preserving cosine
similarity computing protocol as an example in response to data mining's
efficiency and privacy requirements in the big data era.
3.4 SCALABLE, SERVER-PASSIVE, USER ANONYMOUS TIMED
RELEASE CRYPTOGRAPHY (2005)
AUTHORS: Cheng-Kang Chu, Sherman S.M. Chow, Wen-Guey Tzeng,
Jianying Zhou, and Robert H. Deng, Senior Member, IEEE
This paper studies the problem of sending messages into the future,
commonly known as timed-release cryptography. Using a bilinear pairing
on any Gap Diffie-Hellman group, it solves the scalability and anonymity
problems by giving scalable, server-passive and user-anonymous
timed-release public-key encryption schemes that allow precise absolute
release-time specifications. The trusted time server in this scheme is
completely passive:
no interaction between it and the sender or receiver is needed; it assures the
privacy of a message and the anonymity of both its sender and receiver. The
scheme also has a number of desirable properties including a single form of
update for all users, self-authenticated time-bound key updates, and key
insulation, making it a scalable and appealing solution.
3.5 CIPHERTEXT-POLICY ATTRIBUTE-BASED ENCRYPTION
(2007)
AUTHORS: Dan Boneh, Matthew Franklin.
This paper presents a system for realizing complex access control on
encrypted data that is called Ciphertext-Policy Attribute-Based Encryption.
By using these techniques encrypted data can be kept confidential even if the
storage server is untrusted; moreover, the methods are secure against
collusion attacks. In this system, attributes are used to describe a user's
credentials, and the party encrypting the data determines a policy for who
can decrypt it. This method is conceptually closer to traditional access
control methods such as Role-Based Access Control (RBAC).
3.6 PROVABLY SECURE CIPHERTEXT POLICY ABE (2007)
AUTHORS: Cong Wang, Sherman S.M. Chow, Qian Wang, KuiRen, and
Wenjing Lou.
The basic scheme in this paper is proven to be chosen-plaintext
(CPA) secure under the decisional bilinear Diffie-Hellman (DBDH)
assumption. The paper then applies the Canetti-Halevi-Katz technique to
obtain a chosen-ciphertext (CCA) secure extension using one-time
signatures. The security proof is a reduction to the DBDH assumption and
the strong existential unforgeability of the signature primitive. In addition,
hierarchical attributes are introduced to optimize the scheme, reducing both
ciphertext size and encryption/decryption time while maintaining CPA
security. Finally, the paper proposes an extension in which access policies
are arbitrary threshold trees.
3.7 ACHIEVING SECURE, SCALABLE AND FINE-GRAINED DATA
ACCESS CONTROL IN CLOUD COMPUTING (2010)
AUTHORS: Mikhail J. Atallah, Keith B. Frikken, and Marina Blanton.
This paper addresses this challenging open issue by, on one hand,
defining and enforcing access policies based on data attributes and, on the
other hand, allowing the data owner to delegate most of the computation
tasks involved in fine-grained data access control to untrusted cloud servers
without disclosing the underlying data contents. This goal is achieved by
exploiting and uniquely combining techniques of attribute-based encryption
(ABE), proxy re-encryption, and lazy re-encryption. The scheme also has
the salient properties of user access privilege confidentiality and user secret
key accountability.
3.8 SOK: SECURE DATA DELETION (2013)
AUTHORS: Shankar Gadhve, Prof.Deveshree Naidu.
Secure data deletion is the task of deleting data irrecoverably from a
physical medium. In this paper, existing approaches are organized in terms
of their interfaces to physical media. The paper further presents a taxonomy
of adversaries differing in their capabilities, as well as a systematization of
the characteristics of secure deletion approaches. Characteristics include
environmental assumptions as well as behavioral properties of each
approach, such as deletion latency and physical wear. The paper also
performs experiments to test a selection of approaches on a variety of file
systems and analyzes the assumptions made in practice.
CHAPTER 4
SYSTEM ARCHITECTURE
Fig 4.1 System Architecture
4.1 SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the process of
trying to discover every conceivable fault or weakness in a work product. It
provides a way to check the functionality of components, sub-assemblies,
assemblies and a finished product. It is the process of exercising software
with the intent of ensuring that the software system meets its requirements
and user expectations and does not fail in an unacceptable manner. There
are various types of test, each addressing a specific testing requirement.
4.2 TYPES OF TESTING
4.2.1 Unit testing
Unit testing involves the design of test cases that validate that the
internal program logic is functioning properly, and that program inputs
produce valid outputs. All decision branches and internal code flow should
be validated. It is the testing of individual software units of the application.
It is done after the completion of an individual unit before integration. This
is a structural testing, that relies on knowledge of its construction and is
invasive. Unit tests perform basic tests at component level and test a specific
business process, application, and/or system configuration. Unit tests ensure
that each unique path of a business process performs accurately to the
documented specifications and contains clearly defined inputs and expected
results.
4.2.2 Integration testing
Integration tests are designed to test integrated software components
to determine if they actually run as one program. Integration tests
demonstrate that although the components were individually satisfactory,
the combination of components is correct and consistent. Integration testing
is
specifically aimed at exposing the problems that arise from the
combination of components.
4.2.3 Functional testing
Functional tests provide systematic demonstrations that functions tested
are available as specified by the business and technical requirements, system
documentation, and user manuals. Functional testing is centered on the
following items:
Valid input: Identified classes of valid input must be accepted.
Invalid input: Identified classes of invalid input must be rejected.
Functions: Identified functions must be exercised.
Output: Identified classes of application outputs must be exercised.
Systems/Procedures: Interfacing systems or procedures must be
invoked.
Organization and preparation of functional tests is focused on
requirements, key functions, or special test cases. In addition, systematic
coverage pertaining to identified business process flows, data fields,
predefined processes, and successive processes must be considered for
testing.
4.3 SYSTEM TESTING
System testing ensures that the entire integrated software system
meets requirements. It tests a configuration to ensure known and predictable
results. An example of system testing is the configuration oriented system
integration test. System testing is based on process descriptions and flows,
emphasizing pre-driven process links and integration points.
4.3.1 White box testing
White box testing is testing in which the software tester has
knowledge of the inner workings, structure and language of the software, or
at least its purpose. It is used to test areas that cannot be reached from a
black-box level.
4.3.2 Black box testing
Black box testing is testing the software without any knowledge of the
inner workings, structure or language of the module being tested. Black box
tests must be written from a definitive source document, such as a
specification or requirements document. It is testing in which the software
under test is treated as a black box: you cannot "see" into it. The test
provides inputs and responds to outputs without considering how the
software works.
4.3.3 Big bang
In this integration testing type, most or all of the modules are
integrated together to form a nearly complete system. This is very similar
to system testing, as it basically exercises a whole system before the testing
starts. It is an ideal integration technique for small systems.
4.3.4 Top-down testing
This is where the highest-level components are tested first, then
testing works downwards step by step to the lower components. Top-down
testing mainly requires the testing team to separate what is important from
what is least important; the most important modules are worked on first.
The top-down approach is similar to a binary-tree approach.
The advantage of this way of testing is that if a prototype is released
or shown, most of the main functionality will already be working. The code
is also easier to maintain, and there is better control over errors, so most
errors are removed before moving to the next stage of testing.
4.3.5 Bottom-up testing
This type of testing works its way upwards to the higher-level
components. The components are separated by level of importance, and the
least important modules are worked on first; testing then slowly works its
way up, integrating components at each level before moving upwards.
The code is maintained more easily and there is a clearer structure of
how to do things.
4.3.6 Unit testing
Unit testing is usually conducted as part of a combined code and unit
test phase of the software lifecycle, although it is not uncommon for coding
and unit testing to be conducted as two distinct phases.
4.3.7 Test strategy and approach
Field testing will be performed manually and functional tests will be
written in detail.
4.3.8 Test objectives
All field entries must work properly.
Pages must be activated from the identified link.
The entry screen, messages and responses must not be delayed.
4.3.9 Features to be tested
Verify that the entries are of the correct format.
No duplicate entries should be allowed.
All links should take the user to the correct page.
4.4 INTEGRATION TESTING
Software integration testing is the incremental integration testing of
two or more integrated software components on a single platform, intended
to expose failures caused by interface defects.
The task of the integration test is to check that components or
software applications, e.g. components in a software system or, one step up,
software applications at the company level, interact without error.
4.4.1 Integration test plan
When producing a test plan, it must include the following information to be
effective:
A strategy to use when testing the integrated modules and how the
tests will be conducted.
What will be tested, for example software features.
The time scale and time management.
Responsibilities, e.g. personnel.
Pass and fail conditions for the tests.
The risks involved.
Approval from all the important people involved.
Most test plans are approved and worked on with the client, so the
client may order some changes later on; a test plan may therefore have to
include more information, and it is best to get this approved so that no
problems are encountered later.
Test Results: All the test cases mentioned above passed successfully. No
defects encountered.
4.5 ACCEPTANCE TESTING
User Acceptance Testing is a critical phase of any project and requires
significant participation by the end user. It also ensures that the system
meets the functional requirements.
Test Results: All the test cases mentioned above passed successfully. No
defects encountered.
Test Case Reports
Manual Testing (validation)
Test Case Name | Test Case Description | Action | Expected Result | Actual Result | Status
UserName Validation | Username should contain upper case, lower case, a number/special character, and a minimum of 8 characters | Click the Submit button to register | Shows the corresponding message based upon the input text | Same as expected | Pass
Password Validation | Password should contain upper case, lower case and a number | Click the Submit button to register | Shows the corresponding message based upon the input text | Same as expected | Pass
Email Address Verification | Checks whether the email address is correct or not | Click the Submit button to register | Shows the corresponding message based upon the input text | Same as expected | Pass
Upload Action Verification | Checks that the image is not uploaded if it is watermarked | Click the Browse button | Shows the file image in the corresponding field | Same as expected | Pass
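The username and password rules exercised by the first two test cases can be sketched as simple field checks. The exact rules below (character classes, minimum length) are assumed from the table above, and the class and method names are illustrative, not the project's actual validation code.

```java
import java.util.regex.Pattern;

// Sketch of the field validation behind the manual test cases.
// Rules are assumptions drawn from the test case descriptions.
public class FieldValidator {

    // Username: at least 8 chars, with upper case, lower case,
    // and a digit or special character.
    static boolean validUserName(String s) {
        return s != null
                && s.length() >= 8
                && Pattern.compile("[A-Z]").matcher(s).find()
                && Pattern.compile("[a-z]").matcher(s).find()
                && Pattern.compile("[0-9!@#$%^&*]").matcher(s).find();
    }

    // Email: minimal shape check only; real validation is stricter.
    static boolean validEmail(String s) {
        return s != null && s.matches("[^@\\s]+@[^@\\s]+\\.[^@\\s]+");
    }
}
```

A rejected field would trigger the "corresponding message" shown in the Expected Result column.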
4.6 DATA FLOW DIAGRAM
The DFD is also called a bubble chart. It is a simple graphical formalism
that can be used to represent a system in terms of the input data to the
system, the various processing carried out on this data, and the output data
generated by the system. The data flow diagram (DFD) is one of the most
important modeling tools. It is used to model the system components.
Fig 4.2 Data Flow Diagram
CHAPTER 5
MODULES DESCRIPTION
5.1 MODULES
1. Authentication and Authorization
2. File Encryption and Data storing to Cloud.
3. File Sharing
4. File Decryption and Download
5. File Autolysis and Access Control
5.2 MODULES DESCRIPTION
5.2.1 Authentication and authorization
In this module the user has to register first; only then does he/she have
access to the database. After registration the user can log in to the site. The
authorization and authentication process enables the system to protect
itself and, besides, protects the whole mechanism from unauthorized usage.
Registration involves collecting the details of the users who want to use
this application.
5.2.2 File encryption and data storing to cloud
In this module, the user uploads the files which he wants to share. At
first the uploaded files are stored in the local system. Then the user uploads
the file to the real cloud storage (in this application, we use Dropbox).
While uploading to the cloud, the file is encrypted using the AES (Advanced
Encryption Standard) algorithm and a private key is generated. The
encrypted data is then converted to binary data for additional security and
stored in the cloud.
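The encryption step described above can be sketched with the JDK's built-in javax.crypto API. This is a minimal sketch only: the report does not specify the cipher mode or how the ciphertext is encoded, so ECB and Base64 are assumptions made purely to keep the example short (CBC or GCM with an IV would be preferable in practice):

```java
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

// Minimal AES sketch of the upload path: generate a key, encrypt the file
// bytes, and encode the ciphertext for storage in the cloud.
public class UploadCrypto {
    public static SecretKey generateKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);                      // 128-bit AES key
        return kg.generateKey();
    }

    // ECB is used only for brevity; a mode with an IV is safer.
    public static String encrypt(byte[] plain, SecretKey key) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
        c.init(Cipher.ENCRYPT_MODE, key);
        return Base64.getEncoder().encodeToString(c.doFinal(plain));
    }

    public static byte[] decrypt(String cipherText, SecretKey key) throws Exception {
        Cipher c = Cipher.getInstance("AES/ECB/PKCS5Padding");
        c.init(Cipher.DECRYPT_MODE, key);
        return c.doFinal(Base64.getDecoder().decode(cipherText));
    }
}
```

The Base64 string plays the role of the "binary data" stored in the cloud, while the generated key is the private key that is later mailed to the recipient.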
5.2.3 File sharing
In this module, the uploaded files are shared with friends or other
users. The data owner sets the time at which the shared data in the cloud
expires. The private key of the shared data is sent through e-mail.
5.2.4 File decryption and download from cloud
In this module, the user can download the data by decrypting it using
the AES (Advanced Encryption Standard) algorithm. The user must supply
the corresponding private key to decrypt the data. The data is deleted if the
user enters a wrong private key three times; if the file is deleted, an
intimation e-mail is sent to the data owner. The downloaded data is stored
on the local drive.
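The three-attempt rule can be sketched as below; the class name and the owner-notification stub are hypothetical, standing in for the project's e-mail step:

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the three-attempt rule: after three wrong private keys the
// stored file is deleted and the owner is notified.
public class DownloadGuard {
    public static final int MAX_ATTEMPTS = 3;
    private int wrongAttempts = 0;

    public boolean tryDownload(String enteredKey, String realKey, Path stored)
            throws Exception {
        if (enteredKey.equals(realKey)) {
            wrongAttempts = 0;             // correct key: proceed to decrypt
            return true;
        }
        wrongAttempts++;
        if (wrongAttempts >= MAX_ATTEMPTS) {
            Files.deleteIfExists(stored);  // destroy the shared file
            notifyOwner(stored);           // e-mail intimation (stubbed)
        }
        return false;
    }

    private void notifyOwner(Path stored) {
        System.out.println("Owner notified: " + stored + " deleted");
    }
}
```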
5.2.5 File autolysis of data and access control
The data is automatically deleted if the user does not download the
file successfully within the time given by the data owner. If the user
downloads the data, file autolysis is disabled. If the file is deleted by the
autolysis scheme, an intimation e-mail is sent to the data owner. If the data
owner attaches anything malicious to the shared file, the shared user is
notified. The website also blocks backward access: for example, once a user
logs out of the account, the browser cannot go back to the previous page.
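The timed self-destruction described above can be sketched with a scheduled task. All names here are illustrative; a real deployment would persist the expiry time rather than hold it in memory:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Sketch of file autolysis: schedule deletion at the owner-chosen expiry
// time; a successful download cancels the pending task.
public class Autolysis {
    private final ScheduledExecutorService timer =
        Executors.newSingleThreadScheduledExecutor();

    public ScheduledFuture<?> scheduleDestruction(Path stored, long delayMs) {
        return timer.schedule(() -> {
            try {
                Files.deleteIfExists(stored);   // file self-destructs
                System.out.println("Owner notified: " + stored + " expired");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }, delayMs, TimeUnit.MILLISECONDS);
    }

    public void onSuccessfulDownload(ScheduledFuture<?> task) {
        task.cancel(false);                      // disable autolysis
    }

    public void shutdown() { timer.shutdown(); }
}
```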
CHAPTER 6
SYSTEM SPECIFICATION
6.1 SYSTEM REQUIREMENT
6.1.1 Hardware requirements
Processor : Pentium –III
Speed : 1.1 GHz
RAM : 256 MB (min)
Hard Disk : 20 GB
Floppy Drive : 1.44 MB
Key Board : Standard Windows Keyboard
Mouse : Two or Three Button Mouse
Monitor : SVGA
6.1.2 Software requirements
Operating System : Windows
Front End : Java JDK 1.8.2
Front End Tool : NetBeans IDE 8.2
Database : MySQL 5.0
Connectivity
Database Tool : SQLyog
6.2 SOFTWARE ENVIRONMENT
6.2.1 Java Technology
Java technology is both a programming language and a platform.
6.2.2 The java programming language
The Java programming language is a high-level language that can be
characterized by all of the following buzzwords:
Simple
Architecture neutral
Object oriented
Portable
Distributed
High performance
Interpreted
Multithreaded
Robust
Dynamic
Secure
The Java programming language is unusual in that a program is both
compiled and interpreted. With the compiler, one first translates a program
into an intermediate language called Java byte code: platform-independent
code that is interpreted by the interpreter on the Java platform. The
interpreter parses and runs each Java byte code instruction on the computer.
Compilation happens just once; interpretation occurs each time the program
is executed. The following figure illustrates how this works.
Fig6.1 Java Programming Language
One can think of Java byte codes as the machine code instructions for
the Java virtual machine (Java VM). Every Java interpreter, whether it's a
development tool or a Web browser that can run applets, is an
implementation of the Java VM. Java byte codes help make “write once, run
anywhere” possible. One can compile the program into byte codes on any
platform that has a Java compiler. The byte codes can then be run on any
implementation of the Java VM. That means that as long as a computer has a
Java VM, the same program written in the Java programming language can
run on Windows 2000, a Solaris workstation, or on an iMac.
Fig 6.2 Java Compiler
6.2.3 The java platform
A platform is the hardware or software environment in which a
program runs. Most platforms can be described as a combination of the
operating system and hardware. The Java platform differs from most other
platforms in that it’s a software-only platform that runs on top of other
hardware-based platforms. The Java platform has two components:
The Java Virtual Machine (Java VM): It’s the base for the Java
platform and is ported onto various hardware-based platforms.
The Java Application Programming Interface (Java API): The Java
API is a large collection of ready-made software components that
provide many useful capabilities, such as graphical user interface
(GUI) widgets. The Java API is grouped into libraries of related
classes and interfaces; these libraries are known as packages. The
following figure depicts a program that’s running on the Java
platform.
Fig 6.3 Java platform
Native code is code that, after compilation, runs on a specific hardware
platform. As a platform-independent environment, the Java platform can be
a bit slower than native code. However, smart compilers, well-tuned
interpreters, and just-in-time byte code compilers can bring performance
close to that of native code without threatening portability.
What Can Java Technology Do?
The most common types of programs written in the Java programming
language are applets and applications. An applet is a program that adheres
to certain conventions that allow it to run within a Java-enabled browser.
The general-purpose, high-level Java programming language is also a
powerful software platform.
An application is a standalone program that runs directly on the Java
platform. A special kind of application known as a server serves and
supports clients on a network. Examples of servers are Web servers, proxy
servers, mail servers, and print servers. Another specialized program is a
servlet. Java Servlets are used for building interactive web applications. The
API supports all these kinds of programs with packages of software
components. Every full implementation of the Java platform gives you the
following features
The essentials: Objects, strings, threads, numbers, input and output,
data structures, system properties, date and time, and so on.
Applets: The set of conventions used by applets.
Networking: URLs, TCP, UDP sockets, and IP addresses.
Internationalization: Programs can automatically adapt to specific
locales and be displayed in the appropriate language.
Security: Includes electronic signatures, public and private key
management, access control, and certificates.
Software components: Known as JavaBeans™, can plug into
existing component architectures.
Object serialization: Allows lightweight persistence and
communication via Remote Method Invocation (RMI).
Java Database Connectivity (JDBC™): Provides uniform access to
a wide range of relational databases.
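One of the features listed above, object serialization, can be demonstrated with a short round-trip through the standard java.io streams:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Round-trip demonstration of object serialization: an object is written to
// a byte stream and read back, which is also the basis of RMI argument
// passing mentioned above.
public class SerializationDemo {
    public static byte[] toBytes(Serializable obj) throws Exception {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(obj);
        }
        return buf.toByteArray();
    }

    public static Object fromBytes(byte[] data) throws Exception {
        try (ObjectInputStream in =
                 new ObjectInputStream(new ByteArrayInputStream(data))) {
            return in.readObject();
        }
    }
}
```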
Fig 6.4 Java IDE
The Java platform also has APIs for 2D and 3D graphics, accessibility,
servers, collaboration, telephony, speech, animation, and more.
Applications of Java Technology
The various applications of Java technology are the following:
Get started quickly: Although the Java programming language is a
powerful object-oriented language, it’s easy to learn, especially for
programmers already familiar with C or C++.
Write better code: The Java programming language encourages good
coding practices, and its garbage collection helps the user to avoid
memory leaks.
Develop programs more quickly: Development may be as much as
twice as fast as writing the same program in C++, since Java is a
simpler programming language than C++.
Avoid platform dependencies with 100% Pure Java: The program
is portable by avoiding the use of libraries written in other languages.
The 100% Pure Java™ Product Certification Program has a repository
of historical process manuals, white papers, brochures, and similar
materials online.
Write once, run anywhere: Because 100% Pure Java programs are
compiled into machine-independent byte codes, they run consistently
on any Java platform.
Distribute software more easily: You can upgrade applets easily
from a central server. Applets have the feature of allowing new classes
to be loaded “on the fly,” without recompiling the entire program.
ODBC
Microsoft Open Database Connectivity (ODBC) is a standard
programming interface for application developers and database systems
providers. The ODBC Administrator in Control Panel can specify the
particular database that is associated with a data source that an ODBC
application program is written to use. When the ODBC icon is installed in
Control Panel, it uses a file called ODBCINST.DLL. It is also possible to
administer your ODBC data sources through a stand-alone program called
ODBCADM.EXE.
In ODBC the application can be written to use the same set of
function calls to interface with any data source, regardless of the database
vendor. The source code of the application doesn’t change. The operating
system uses the Registry information written by ODBC Administrator to
determine which low-level ODBC drivers are needed to talk to the data
source (such as the interface to Oracle or SQL Server). The loading of the
ODBC drivers is transparent to the ODBC application program. In a
client/server environment, the ODBC API even handles many of the network
issues for the application programmer.
JDBC
JDBC offers a generic SQL database access mechanism that provides a
consistent interface to a variety of RDBMSs. This consistent interface is
achieved through the use of “plug-in” database connectivity modules, or
drivers. To have JDBC support, a driver must be provided for each platform
on which the database and Java run. To gain wider acceptance of JDBC,
Sun based JDBC's framework on ODBC, which has widespread support on
a variety of platforms. Basing JDBC on ODBC allows vendors to bring
JDBC drivers to market much faster than developing a completely new
connectivity solution.
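The JDBC access pattern can be sketched as below. The URL format is the standard MySQL one, but the host, database name, table and credentials are placeholders, and a running MySQL server (with its driver on the classpath) is assumed before queryUsers is actually called:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Sketch of the JDBC access pattern described above. All connection
// details are hypothetical placeholders.
public class JdbcSketch {
    public static String buildUrl(String host, int port, String db) {
        return "jdbc:mysql://" + host + ":" + port + "/" + db;
    }

    // Requires a live MySQL server and the MySQL JDBC driver on the
    // classpath; the "users" table is an invented example.
    public static void queryUsers(String url, String user, String pass)
            throws Exception {
        try (Connection con = DriverManager.getConnection(url, user, pass);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT name FROM users")) {
            while (rs.next()) {
                System.out.println(rs.getString("name"));
            }
        }
    }
}
```

The same code works against any RDBMS for which a JDBC driver exists; only the URL prefix changes.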
JDBC Goals
The goals set for JDBC drove the development of the API. These goals,
in conjunction with early reviewer feedback, finalized the JDBC class
library into a solid framework for building database applications in Java.
The goals are important because they give some insight into why certain
classes and functionalities behave the way they do.
Importance of Java to the Internet
Java has had a profound effect on the Internet, because Java expands
the universe of objects that can move about freely in cyberspace. In a
network, two categories of objects are transmitted between the server and
the personal computer: passive information and dynamic, active programs.
Such dynamic, networked programs raise concerns in the areas of security
and portability, both of which Java addresses. Java has opened the door to a
new form of program called the applet.
Java Architecture
Java architecture provides a portable, robust, high-performing
environment for development. Java provides portability by compiling
source code to byte code for the Java Virtual Machine, which is then
interpreted on each platform by the run-time environment.
COMPILATION OF CODE
Fig 6.5 Compiling and Interpreting Java Source Code
[The figure shows source code compiled to platform-independent byte code, which the Java interpreter on each platform (PC, Macintosh, SPARC) then executes.]
When you compile the code, the Java compiler creates machine code
(called byte code) for a hypothetical machine called the Java Virtual
Machine (JVM). The JVM was created to overcome the issue of portability:
the code is written and compiled for one machine and interpreted on all
machines.
JVM
The Java Virtual Machine is the cornerstone of the Java platform. It is
the component of the technology responsible for its hardware and operating
system independence, the small size of its compiled code, and its ability to
protect users from malicious programs. The Java Virtual Machine is an
abstract computing machine. Like a real computing machine, it has an
instruction set and manipulates various memory areas at run time. It is
reasonably common to implement a programming language using a virtual
machine. Oracle's current implementations emulate the Java Virtual
Machine on mobile, desktop and server devices.
A class file contains Java Virtual Machine instructions (or byte codes)
and a symbol table, as well as other ancillary information. For the sake of
security, the Java Virtual Machine imposes strong syntactic and structural
constraints on the code in a class file. The Java Virtual Machine specified
here is compatible with the Java SE 7 platform, and supports the Java
programming language specified in The Java Language Specification, Java
SE 7 Edition.
Simple
Java was designed to be easy for the professional programmer to learn
and to use effectively. If you are an experienced C++ programmer, learning
Java will require little effort, because Java inherits the C/C++ syntax and
many of the object-oriented features of C++. Most of the confusing
concepts from C++ are either left out of Java or implemented in a cleaner,
more approachable manner. In Java there are a small number of clearly
defined ways to accomplish a given task.
Object oriented
Java was not designed to be source-code compatible with any other
language. This allowed the Java team the freedom to design with a clean
slate. One outcome of this was a clean, usable, pragmatic approach to
objects. The object model in Java is simple and easy to extend, while simple
types, such as integers, are kept as high-performance non-objects.
Robust
The multi-platform environment of the web places extraordinary
demands on a program, because the program must execute reliably in a
variety of systems. The ability to create robust programs was given a high
priority in the design of Java. Java is a strictly typed language; it checks
your code at compile time and at runtime. Java virtually eliminates the
problems of memory management and deallocation, which is completely
automatic.
JSP
The most significant of the many good reasons for using JSP is that it is
amazingly easy to develop sophisticated Web sites with JSPs. JSPs also
provide a great deal of flexibility when generating HTML, through the
ability to create HTML-like custom tags.
High-quality JSP tools are readily available and easy to use. Developers
do not need to buy expensive software or commit to a particular operating
system in order to use JSPs. JSP is a powerful tool to serve even a midsized
Web site without problems. High-quality commercial JSP tools are available
for serving even the most complex and high-traffic Web sites.
The recently released version 2.0 of the JSP Specification provides even
more features that simplify the process of creating Web sites. In addition, a
standard tag library that provides many JSP tags that solve a wide range of
common problems has been released.
CSS
While (X)HTML is used to describe the content in a web page, it is
Cascading Style Sheets (CSS) that describe how you want that content to
look. CSS is the official and standard mechanism for formatting text and
page layouts. CSS also provides methods for controlling how documents
will be presented in media other than the traditional browser on a screen,
such as in print and on handheld devices. Style sheets are also a great tool
for automating production. CSS also has rules for specifying the non-visual
presentation of documents, such as sound when read by a screen reader.
Servlets
Java Servlets are programs that run on a Web or application server
and act as a middle layer between requests coming from a Web browser or
other HTTP client and databases or applications on the HTTP server. Using
Servlets, you can collect input from users through web page forms, present
records from a database or another source, and create web pages
dynamically.
Java Servlets often serve the same purpose as programs implemented
using the Common Gateway Interface (CGI). Servlets execute within the
address space of a Web server, so it is not necessary to create a separate
process to handle each client request. Servlets are platform-independent
because they are written in Java. The full functionality of the Java class
libraries is available to a servlet. It can communicate with applets,
databases, or other software via the sockets and RMI mechanisms.
SERVLET ARCHITECTURE:
Servlets perform the following major tasks:
Read the explicit data sent by the clients (browsers): This includes an
HTML form on a Web page, but it could also come from an applet or
a custom HTTP client program.
Read the implicit HTTP request data sent by the clients
(browsers): This includes cookies, media types and compression
schemes the browser understands, and so forth.
Process the data and generate the results: This process may require
talking to a database, executing an RMI or CORBA call, invoking a
Web service, or computing the response directly.
Send the explicit data (i.e., the document) to the clients
(browsers): This document can be sent in a variety of formats,
including text (HTML or XML), binary (GIF images), Excel, etc.
Send the implicit HTTP response to the clients (browsers): This
includes telling the browsers or other clients what type of document is
being returned (e.g., HTML), setting cookies and caching parameters,
and other such tasks.
XML
XML stands for Extensible Markup Language. It is a text-based
markup language derived from Standard Generalized Markup Language
(SGML). XML tags identify the data and are used to store and organize it,
rather than specifying how to display it, which is what HTML tags do.
XML is not going to replace HTML in the near future, but it introduces new
possibilities by adopting many successful features of HTML.
There are three important characteristics of XML that make it useful in a
variety of systems and solutions:
XML is extensible: XML allows you to create your own self-
descriptive tags, or language, that suits your application.
XML carries the data, does not present it: XML allows you to store
the data irrespective of how it will be presented.
XML is a public standard: XML was developed by an organization
called the World Wide Web Consortium (W3C) and is available as an
open standard.
XML can work behind the scene to simplify the creation of HTML
documents for large web sites.
XML can be used to exchange the information between organizations
and systems.
XML can be used for offloading and reloading of databases.
XML can be used to store and arrange the data, which can customize
your data handling needs.
XML can easily be merged with style sheets to create almost any
desired output.
Virtually, any type of data can be expressed as an XML document.
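As an illustration, a small XML document can be parsed with the JDK's built-in DOM parser; the element names here are invented for the example:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

// Parsing a small self-descriptive XML document with the JDK's built-in
// DOM parser and reading one element's text content.
public class XmlDemo {
    public static String readOwner(String xml) throws Exception {
        DocumentBuilder b =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc =
            b.parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
        return doc.getElementsByTagName("owner").item(0).getTextContent();
    }
}
```

Because the tags name the data (owner, expiry, and so on), the same document could be exchanged between organizations or reloaded into a database without ambiguity.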
IDE
An integrated development environment/interactive development
environment is a software application that provides comprehensive facilities
to computer programmers for software development. An IDE normally
consists of a source code editor, build automation tools and a debugger.
Most modern IDEs offer intelligent code completion features. Some IDEs
contain a compiler, interpreter, or both. Sometimes a version control
system and various tools are integrated to simplify the construction of a
Graphical User Interface (GUI). Many modern IDEs also have a class
browser, an object browser, and a class diagram, for use in object-oriented
software development.
NetBeans
NetBeans IDE is the official IDE for Java 8. With its editors, code
analyzers, and converters, you can quickly and smoothly upgrade your
applications to use new Java 8 language constructs, such as lambdas,
functional operations, and method references. Batch analyzers and
converters are provided to search through multiple applications at the same
time, matching patterns for conversion to new Java 8 language constructs.
With its constantly improving Java Editor, many rich features and an
extensive range of tools, templates and samples, NetBeans IDE sets the
standard for developing with cutting edge technologies out of the box.
APACHE TOMCAT SERVER
Tomcat is a Java servlet container and web server. A web server is, of
course, the program that dishes out web pages in response to requests from a
user sitting at a web browser. Apache’s Tomcat provides both Java servlet
and Java Server Pages (JSP) technologies (in addition to traditional static
pages and external CGI programming).
Tomcat can be used stand-alone, but it is often used “behind”
traditional web servers such as Apache httpd, with the traditional server
serving static pages and Tomcat serving dynamic servlet and JSP requests.
Between Tomcat and the servlets and JSP code residing on it there is also a
standard regulating their interaction, the servlet and JSP specifications,
which are in turn part of Sun's J2EE (Java 2 Enterprise Edition).
MY SQL
MySQL is the world's second most widely used open-source relational
database management system (RDBMS). The SQL phrase stands
for Structured Query Language. MySQL is a popular choice of database for
use in web applications, and is a central component of the widely
used LAMP open source web application software stack (and
other 'AMP' stacks). LAMP is an acronym for "Linux, Apache, MySQL, and
Perl/PHP/Python." Free-software open-source projects that require a full-featured
database management system often use MySQL. Applications
which use MySQL databases
include: TYPO3, MODx, Joomla, Wordpress, phpBB, MyBB, Drupal and
other software. MySQL is also used in many high-profile, large-
scale websites, including Google, Facebook, Twitter and YouTube.
SQL YOG
SQLyog is a GUI tool for the RDBMS MySQL. It is developed by
Webyog, Inc. based out of Bangalore, India and Santa Clara, California.
SQLyog is being used by more than 30,000 customers worldwide and has
been downloaded more than 2,000,000 times.
RMI
The Java Remote Method Invocation (Java RMI) is a Java API that
performs the object-oriented equivalent of remote procedure calls (RPC),
with support for direct transfer of serialized Java classes and distributed
garbage collection.
1. The original implementation depends on Java Virtual Machine (JVM)
class representation mechanisms and it thus only supports making
calls from one JVM to another. The protocol underlying this Java-
only implementation is known as Java Remote Method
Protocol (JRMP).
2. In order to support code running in a non-JVM context,
a CORBA version was later developed. Usage of the term RMI may
denote solely the programming interface or may signify both the API
and JRMP, whereas the term RMI-IIOP (read: RMI over IIOP)
denotes the RMI interface delegating most of the functionality to the
supporting CORBA implementation.
SWING
The AWT defines a basic set of controls, windows, and dialog boxes
that support a usable, but limited, graphical interface. The use of native peers
led to several problems.
First, because of variations between operating systems, a component
might look, or even act, differently on different platforms. This
potential variability threatened the philosophy of Java: write once, run
anywhere.
Second, the look and feel of each component was fixed (because it is
defined by the platform) and could not be (easily) changed.
Third, the use of heavyweight components caused some restrictions:
for example, a heavyweight component is always rectangular and
opaque.
Swing is built on the foundation of the AWT. Swing also uses the same
event-handling mechanism as the AWT. Swing was created to address the
limitations present in the AWT. It does this through two key features:
lightweight components and a pluggable look and feel. Together they
provide an elegant, yet easy-to-use solution to the problems of the AWT.
These two features define the essence of Swing.
DROP BOX
Dropbox is a cloud storage provider (sometimes referred to as an
online backup service) that is frequently used as a file-sharing service.
Dropbox is a file hosting service operated by Dropbox, Inc., headquartered
in San Francisco, California, that offers cloud storage, file synchronization,
personal cloud, and client software. Dropbox allows users to create a
special folder on their computers, which Dropbox then synchronizes so that
it appears to be the same folder (with the same contents) regardless of
which computer is used to view it. Files placed in this folder are accessible
via the folder, or through the Dropbox website and a mobile app. Dropbox
provides client software for Microsoft Windows, OS X, Linux, Android,
iOS, BlackBerry OS and web browsers, as well as unofficial ports to
Symbian, Windows Phone, and MeeGo.
Java Networking
The term network programming refers to writing programs that execute
across multiple devices (computers), in which the devices are all connected
to each other using a network. The java.net package of the J2SE APIs
contains a collection of classes and interfaces that provide the low-level
communication details, allowing you to write programs that focus on solving
the problem at hand. The java.net package provides support for the two
common network protocols:
TCP: TCP stands for Transmission Control Protocol, which allows
for reliable communication between two applications. TCP is
typically used over the Internet Protocol, which is referred to as
TCP/IP.
UDP: UDP stands for User Datagram Protocol, a connection-less
protocol that allows for packets of data to be transmitted between
applications.
Two subjects are particularly relevant here:
Socket Programming: the most widely used concept in networking,
explained in detail below.
URL Processing: a separate topic, not covered here.
Socket Programming
Sockets provide the communication mechanism between two
computers using TCP. A client program creates a socket on its end of the
communication and attempts to connect that socket to a server. When the
connection is made, the server creates a socket object on its end of the
communication. The client and server can now communicate by writing to
and reading from the socket.
The java.net.Socket class represents a socket, and the
java.net.ServerSocket class provides a mechanism for the server program to
listen for clients and establish connections with them. The following steps
occur when establishing a TCP connection between two computers using
sockets:
The server instantiates a ServerSocket object, denoting which port
number communication is to occur on.
The server invokes the accept() method of the ServerSocket class.
This method waits until a client connects to the server on the given
port.
After the server is waiting, a client instantiates a Socket object,
specifying the server name and port number to connect to.
The constructor of the Socket class attempts to connect the client to
the specified server and port number. If communication is established,
the client now has a Socket object capable of communicating with the
server.
On the server side, the accept() method returns a reference to a new
socket on the server that is connected to the client's socket.
After the connections are established, communication can occur using
I/O streams. Each socket has both an OutputStream and an InputStream.
The client's OutputStream is connected to the server's InputStream, and the
client's InputStream is connected to the server's OutputStream. TCP is a
two-way communication protocol, so data can be sent across both streams
at the same time. The java.net package provides further classes with a
complete set of methods for implementing sockets.
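The steps above can be demonstrated in a single process: a ServerSocket accepts one client and echoes a line back (port 0 asks the operating system for any free port, and the names here are illustrative):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// In-process demonstration of TCP sockets: a server thread accepts one
// client connection and echoes a single line back to it.
public class EchoDemo {
    public static String roundTrip(String message) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread serverThread = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                         new InputStreamReader(s.getInputStream()));
                     PrintWriter out =
                         new PrintWriter(s.getOutputStream(), true)) {
                    out.println(in.readLine());   // echo the line back
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            serverThread.start();
            // Client side: connect, send the message, read the echo.
            try (Socket client =
                     new Socket("localhost", server.getLocalPort());
                 PrintWriter out =
                     new PrintWriter(client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()))) {
                out.println(message);
                return in.readLine();
            }
        }
    }
}
```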
6.3 INPUT DESIGN
The input design is the link between the information system and the
user. It comprises the specifications and procedures developed for data
preparation, and the steps necessary to put transaction data into a usable
form for processing. Data entry can be achieved by having the computer
read data from a written or printed document, or by having people key the
data directly into the system. The design of input focuses on controlling the
amount of input required, controlling errors, avoiding delay, avoiding extra
steps and keeping the process simple. The input is designed in such a way
that it provides security and ease of use while retaining privacy. Input
design considers the following:
What data should be given as input?
How the data should be arranged or coded?
The dialog to guide the operating personnel in providing input.
Methods for preparing input validations and steps to follow when
errors occur.
OBJECTIVES
1. Input Design is the process of converting a user-oriented description of
the input into a computer-based system. This design is important to avoid
errors in the data input process and show the correct direction to the
management for getting correct information from the computerized system.
2. It is achieved by creating user-friendly screens for data entry to handle
large volumes of data. The goal of designing input is to make data entry
easier and free from errors. The data entry screen is designed in such a way
that all the data manipulations can be performed. It also provides record
viewing facilities.
3. When the data is entered it is checked for validity. Data can be entered
with the help of screens, and appropriate messages are provided as needed
so that the user is never left confused. Thus the objective of input design is
to create an input layout that is easy to follow.
6.4 OUTPUT DESIGN
A quality output is one which meets the requirements of the end user
and presents the information clearly. In any system the results of processing
are communicated to the users and to other systems through outputs. In
output design it is determined how the information is to be displayed for
immediate need, and also the hard-copy output. It is the most important and
direct source of information to the user. Efficient and intelligent output
design improves the system's relationship with the user and helps
decision-making.
1. Designing computer output should proceed in an organized,
well-thought-out manner. The right output must be developed while
ensuring that each output element is designed so that people will find the
system easy and effective to use. When analysts design computer output,
they should identify the specific output that is needed to meet the
requirements.
2. Select methods for presenting information.
3. Create document, report, or other formats that contain information
produced by the system.
The output form of an information system should accomplish one or more of
the following objectives.
Convey information about past activities, current status or projections
of the future.
Signal important events, opportunities, problems, or warnings.
Trigger an action.
Confirm an action.
6.5 SYSTEM STUDY
Feasibility study
The feasibility of the project is analyzed in this phase, and a
business proposal is put forth with a very general plan for the project and
some cost estimates. During system analysis the feasibility study of the
proposed system is carried out. This is to ensure that the proposed system is
not a burden to the company. For feasibility analysis, some understanding
of the major requirements for the system is essential. The key
considerations involved in the feasibility analysis are:
ECONOMICAL FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
LEGAL FEASIBILITY
SCHEDULE FEASIBILITY
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the
system will have on the organization. The amount of funding that the
company can pour into the research and development of the system is
limited, so the expenditures must be justified. The developed system is well
within budget, and this was achieved because most of the technologies used
are freely available; only the customized products had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility,
that is, the technical requirements of the system. Any system developed
must not place a high demand on the available technical resources, as this
leads to high demands being placed on the client. The developed system
must have modest requirements, so that only minimal or no changes are
required for implementing it.
SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the
system by the user. This includes the process of training the user to use the
system efficiently. The user must not feel threatened by the system, but
must instead accept it as a necessity. The level of acceptance by the users
solely depends on the methods employed to educate users about the
system and to make them familiar with it. Their confidence must be
raised so that they can also offer constructive criticism, which is
welcomed, as they are the final users of the system.
LEGAL FEASIBILITY
This study determines whether the proposed system conflicts with legal
requirements; for example, a data processing system must comply with the
local Data Protection Acts.
SCHEDULE FEASIBILITY
A project will fail if it takes too long to complete before it becomes
useful. Typically this means estimating how long the system will take to
develop and whether it can be completed in a given time period, using
methods such as the payback period. Schedule feasibility is a measure of how
reasonable the project timetable is. Given our technical expertise, are the
project deadlines reasonable? Some projects are initiated with specific
deadlines; you need to determine whether those deadlines are mandatory or
desirable.
CHAPTER 7
UML DIAGRAMS
UML stands for Unified Modeling Language. UML is a standardized
general-purpose modeling language in the field of object-oriented software
engineering. The standard is managed, and was created by, the Object
Management Group.
The goal is for UML to become a common language for creating
models of object-oriented computer software. In its current form, UML
comprises two major components: a meta-model and a notation. In the
future, some form of method or process may also be added to, or associated
with, UML.
The Unified Modeling Language is a standard language for
specifying, visualizing, constructing, and documenting the artifacts of a
software system, as well as for business modeling and other non-software
systems.
7.1 GOALS
The Primary goals in the design of the UML are as follows:
1. Provide users a ready-to-use, expressive visual modeling language so
that they can develop and exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the
core concepts.
3. Be independent of particular programming languages and
development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher-level development concepts such as collaborations,
frameworks, patterns, and components.
7. Integrate best practices.
7.2 USE CASE DIAGRAM
A use case diagram in the Unified Modeling Language (UML) is a
type of behavioral diagram defined by and created from a Use-case analysis.
Its purpose is to present a graphical overview of the functionality provided
by a system in terms of actors, their goals (represented as use cases), and any
dependencies between those use cases. The main purpose of a use case
diagram is to show what system functions are performed for which actor.
Roles of the actors in the system can be depicted.
FIG 7.1 Use case diagram
7.3 CLASS DIAGRAM
In software engineering, a class diagram in the Unified Modeling Language
(UML) is a type of static structure diagram that describes the structure of a
system by showing the system's classes, their attributes, operations (or
methods), and the relationships among the classes.
FIG 7.2 Class diagram
7.4 SEQUENCE DIAGRAM
A sequence diagram in Unified Modeling Language (UML) is a kind of
interaction diagram that shows how processes operate with one another and
in what order. It is a construct of a Message Sequence Chart. Sequence
diagrams are sometimes called event diagrams, event scenarios, and timing
diagrams.
FIG 7.3 Sequence diagram
7.5 ACTIVITY DIAGRAM
Activity diagrams are graphical representations of workflows of stepwise
activities and actions with support for choice, iteration, and concurrency. In
the Unified Modeling Language, activity diagrams can be used to describe
the business and operational step-by-step workflows of components in a system.
FIG 7.4 Activity diagram
7.6 COLLABORATION DIAGRAM
A collaboration diagram, also called a communication diagram or
interaction diagram, is an illustration of the relationships and interactions
among software objects in the Unified Modeling Language.
FIG 7.5 Collaboration diagram
7.7 STATE DIAGRAM
A state diagram is a type of diagram used in computer science and related
fields to describe the behavior of systems. State diagrams require that the
system described be composed of a finite number of states; sometimes this
is indeed the case, while at other times it is a reasonable abstraction. The
statechart diagram is one of the five UML diagrams used to model the
dynamic nature of a system. It defines the different states of an object
during its lifetime, and these states are changed by events.
FIG 7.6 State diagram
7.8 DEPLOYMENT DIAGRAM
A deployment diagram is a structure diagram which shows the architecture of
the system as the deployment (distribution) of software artifacts to
deployment targets.
Fig 7.7 Deployment diagram
Component diagrams are used to describe the components, and
deployment diagrams show how they are deployed in hardware. UML is
mainly designed to focus on the software artifacts of a system. However,
these two diagrams are special diagrams used to focus on software and
hardware components.
7.9 COMPONENT DIAGRAM
In the Unified Modeling Language, a component diagram depicts
how components are wired together to form larger components or software
systems. They are used to illustrate the structure of arbitrarily complex
systems. A composite structure diagram in the Unified Modeling Language
(UML) is a type of static structure diagram that shows the internal
structure of a class and the collaborations that this structure makes
possible.
FIG 7.8 Component diagram
7.10 ER DIAGRAM
An entity–relationship model (ER model for short) describes
interrelated things of interest in a specific domain of knowledge. A
basic ER model is composed of entity types and specifies
relationships that can exist between instances of those entity types.
FIG 7.9 ER Diagram
CHAPTER 8
CONCLUSION
In this project, the SeDaSC methodology is used, which is a cloud storage
security scheme for group data. The proposed methodology provides data
confidentiality, secure data sharing without re-encryption, access control for
malicious insiders, and forward and backward access control. Moreover, the
SeDaSC methodology provides assured deletion by deleting the parameters
required to decrypt a file.
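The assured-deletion idea can be sketched in a few lines of Java; the class and file names below are illustrative only and are not part of the project's code:

```java
import java.io.File;
import java.io.IOException;

// Illustrative sketch of assured deletion (hypothetical helper): once the
// key material needed to decrypt a file is destroyed, the remaining
// ciphertext is unrecoverable, so deleting the key amounts to deleting
// the data even if the encrypted copy stays on the untrusted cloud.
public class AssuredDeletionSketch {
    // Deletes the stored key file; returns true only if it existed and
    // was removed.
    public static boolean destroyKey(File keyFile) {
        return keyFile.exists() && keyFile.delete();
    }

    public static void main(String[] args) throws IOException {
        File key = File.createTempFile("file-key", ".bin");
        System.out.println("Key destroyed: " + destroyKey(key)); // prints "Key destroyed: true"
    }
}
```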
FUTURE WORK
Since this project is all about sharing files securely among users, it has
been designed keeping future scope in mind. What we have aimed at and
achieved is not a product but a tool for a better computing environment, a
tool that can be used to shape many things in the future; thus this project
will give rise to many future modifications forking in all directions. Some
of the near-future scopes of this project are as follows. There are a few
interesting problems we will continue to study in our future work. One of
them is sharing a file with multiple users at a time. This project uses AES
(Advanced Encryption Standard) to encrypt the data; in the future we may
develop this application using different types of advanced encryption
algorithms. This project uses Dropbox as the cloud server. In the future,
the system can be enhanced by letting the user select the cloud server,
such as Google Drive, Hostinger, Dropbox, AppBox, etc.
APPENDIX 1
SAMPLE CODING
FileUploadHandler.java
package com.cryptography;
import com.databaseConnection.DBConnection;
import java.io.File;
import java.io.IOException;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.Statement;
import java.util.List;
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;
import org.apache.commons.fileupload.FileItem;
import org.apache.commons.fileupload.disk.DiskFileItemFactory;
import org.apache.commons.fileupload.servlet.ServletFileUpload;
public class FileUploadHandler extends HttpServlet {
public static String path;
public static String fname;
public static String full_path;
public static String key_value;
protected void processRequest(HttpServletRequest request,
HttpServletResponse response) throws ServletException, IOException {
response.setContentType("text/html;charset=UTF-8");
PrintWriter out = response.getWriter();
try {
} finally {
out.close();
}
}
@Override
protected void doGet(HttpServletRequest request, HttpServletResponse
response) throws ServletException, IOException {
processRequest(request, response);
}
@Override
protected void doPost(HttpServletRequest request, HttpServletResponse
response) throws ServletException, IOException {
String name = null;
HttpSession session=request.getSession();
String UPLOAD_DIRECTORY = "D:/Self Destructing/"
+session.getAttribute("username")+ "/Original Files/";
File file=new File(UPLOAD_DIRECTORY);
if(!file.exists()){
file.mkdirs();
}
String path1="D:/Self Destructing/"+session.getAttribute("username")+"/Binary Files/";
String path2="D:/Self Destructing/"+session.getAttribute("username")+"/Encrypted Files/";
String path3="D:/Self Destructing/"+session.getAttribute("username")+"/Downloaded Files/";
File f1=new File(path1);
File f2=new File(path2);
File f3=new File(path3);
if(!f1.exists()){
f1.mkdirs();
}
if(!f2.exists()){
f2.mkdirs();
}
if(!f3.exists()){
f3.mkdirs();
}
key_value="123456789";
if(ServletFileUpload.isMultipartContent(request)){
try {
List<FileItem> multiparts = new
ServletFileUpload(new DiskFileItemFactory()).parseRequest(request);
for(FileItem item : multiparts){
if(!item.isFormField()){
name = new File(item.getName()).getName();
item.write( new File(UPLOAD_DIRECTORY + File.separator + name));
String path_name=UPLOAD_DIRECTORY+File.separator+name;
ServletContext s=getServletContext();
String FSepa=File.separator;
// store the upload path and file name (not their literal names) for later use
s.setAttribute("FPath", UPLOAD_DIRECTORY);
s.setAttribute("FName", name);
s.setAttribute("Owner", "Admin");
FileEncryption obj=new FileEncryption();
path=UPLOAD_DIRECTORY;
fname=name;
full_path=UPLOAD_DIRECTORY+File.separator+name;
String uname=(String) session.getAttribute("uid");
obj.localfun(path,fname,full_path,uname,key_value,path1,path2);
}
}
Statement st;
Connection con=DBConnection.getCon();
st=con.createStatement();
int i=st.executeUpdate("insert into upload (username,filepath,filename) values('"+session.getAttribute("username")+"','"+full_path+"','"+name+"')");
if(i>0){
request.setAttribute("msg", "File Uploaded Successfully");
}
} catch (Exception ex) {
RequestDispatcher rd=request.getRequestDispatcher("errorpage.jsp");
request.setAttribute("msg","Sorry!!! Error Occured");
rd.forward(request, response);
}
}else{
request.setAttribute("msg","Sorry this Servlet only handles file upload requests");
}
request.getRequestDispatcher("userhome.jsp").forward(request, response);
processRequest(request, response);
}
@Override
public String getServletInfo() {
return "Short description";
}
// </editor-fold>
}
FileEncryption.java
/*
* To change this template, choose Tools | Templates
* and open the template in the editor.
*/
package com.cryptography;
//import com.mysql.jdbc.*;
import java.sql.*;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.CipherOutputStream;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.DESKeySpec;
import javax.servlet.ServletContext;
import java.lang.UnsupportedOperationException;
import java.sql.DriverManager;
import javax.servlet.ServletContext;
import javax.servlet.http.HttpSession;
/**
* @author jemi java
*/
public class FileEncryption {
public void localfun(String p, String n, String f, String u, String key, String
out_putpath, String out_putpath1) {
Statement st;
try {
String out_put_path=out_putpath+n;
String out_put_path1=out_putpath1+n;
String path=p;
String fname=n;
String full_path=path+"/"+fname.trim();
String uname=u;
FileInputStream fis1 = new FileInputStream(full_path);
FileOutputStream fos1 = new FileOutputStream(out_put_path1);
encrypt(key, fis1, fos1);
FileInputStream fis = new FileInputStream(full_path);
FileOutputStream fos = new FileOutputStream(out_put_path);
encrypt(key, fis, fos);
}
catch (Throwable e) {
e.printStackTrace();
}
}
public static void encrypt(String key, InputStream is, OutputStream os)
throws Throwable {
encryptOrDecrypt(key, Cipher.ENCRYPT_MODE, is, os);
}
public static void decrypt(String key, InputStream is, OutputStream os)
throws Throwable {
encryptOrDecrypt(key, Cipher.DECRYPT_MODE, is, os);
}
public static void encryptOrDecrypt(String key, int mode, InputStream is,
OutputStream os) throws Throwable {
DESKeySpec dks = new DESKeySpec(key.getBytes());
SecretKeyFactory skf = SecretKeyFactory.getInstance("DES");
SecretKey desKey = skf.generateSecret(dks);
Cipher cipher = Cipher.getInstance("DES");
// DES/ECB/PKCS5 Padding for SunJCE
if (mode == Cipher.ENCRYPT_MODE) {
cipher.init(Cipher.ENCRYPT_MODE, desKey);
CipherInputStream cis = new CipherInputStream(is, cipher);
doCopy(cis, os);
}
else if (mode == Cipher.DECRYPT_MODE)
{
cipher.init(Cipher.DECRYPT_MODE, desKey);
CipherOutputStream cos = new CipherOutputStream(os, cipher);
doCopy(is, cos);
}
}
public static void doCopy(InputStream is, OutputStream os) throws
IOException {
byte[] bytes = new byte[64];
int numBytes;
while ((numBytes = is.read(bytes)) != -1)
{
os.write(bytes, 0, numBytes);
}
os.flush();
os.close();
is.close();
}
}
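As a sanity check on the DES helpers above, a standalone round-trip can be sketched with the same javax.crypto API; the class name is illustrative, and the key literal mirrors the key_value used in FileUploadHandler (DESKeySpec takes the first 8 bytes):

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.DESKeySpec;

// Standalone DES round-trip sketch mirroring FileEncryption's key setup
// (illustrative class name; not part of the project's code).
public class DesRoundTrip {
    static byte[] run(String key, int mode, byte[] data) throws Exception {
        // DESKeySpec uses only the first 8 bytes of the key material
        SecretKey desKey = SecretKeyFactory.getInstance("DES")
                .generateSecret(new DESKeySpec(key.getBytes()));
        Cipher cipher = Cipher.getInstance("DES"); // DES/ECB/PKCS5Padding in SunJCE
        cipher.init(mode, desKey);
        return cipher.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        String key = "123456789"; // same literal as key_value in FileUploadHandler
        byte[] plain = "hello cloud".getBytes();
        byte[] enc = run(key, Cipher.ENCRYPT_MODE, plain);
        byte[] dec = run(key, Cipher.DECRYPT_MODE, enc);
        System.out.println(new String(dec)); // prints "hello cloud"
    }
}
```

Decryption with the same key restores the original bytes, which is exactly the property the download path depends on.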
StringtoBinary.java
/*
* To change this template, choose Tools | Templates
* and open the template in the editor.
*/
package com.cryptography;
/**
* @author Arun Kumar
*/
public class StringToBinary {
public static String main(String getVal)
{
String cipher = getVal;
StringBuilder binary = null;
try
{
byte[] bytes = cipher.getBytes();
binary = new StringBuilder();
for (byte b : bytes)
{
int val = b;
for (int i = 0; i < 8; i++)
{
binary.append((val & 128) == 0 ? 0 : 1);
val <<= 1;
}
}
System.out.println("Binary....." + binary.toString());
}
catch (Exception ex)
{
System.out.println(ex);
}
return binary.toString();
}
}
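As a quick check of the shifting loop above, which emits each byte most-significant bit first, a standalone sketch (hypothetical class name) reproduces the same logic:

```java
// Standalone check of the bit-shifting idea used in StringToBinary:
// test bit 7 of the byte, append 0 or 1, then shift left, eight times.
public class BinaryCheck {
    static String toBinary(String s) {
        StringBuilder out = new StringBuilder();
        for (byte b : s.getBytes()) {
            int val = b;
            for (int i = 0; i < 8; i++) {
                out.append((val & 128) == 0 ? 0 : 1);
                val <<= 1;
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(toBinary("A")); // 'A' = 65, prints "01000001"
    }
}
```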
TimeHandler.java
/*
* To change this license header, choose License Headers in Project
Properties.
* To change this template file, choose Tools | Templates
* and open the template in the editor.
*/
package com.cryptography;
/**
* @author Sql
*/
import com.databaseConnection.DBConnection;
import com.mail.SendMail;
import java.awt.Toolkit;
import java.io.File;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Timer;
import java.util.TimerTask;
public class TimeHandler
{
int Seconds, FileID, Flag;
Toolkit toolkit;
String Username;
Timer timer;
public TimeHandler(int seconds, String username, int fileid) {
toolkit = Toolkit.getDefaultToolkit();
timer = new Timer();
Seconds=seconds;
Username=username;
FileID=fileid;
timer.schedule(new RemindTask(), seconds * 60000);
System.out.println("End");
}
class RemindTask extends TimerTask {
public void run() {
try{
Statement st1,st2;
Connection con=DBConnection.getCon();
st1=con.createStatement();
ResultSet rs1;
rs1=st1.executeQuery("select status from filesharing where fileid='"+FileID+"'");
rs1.next();
String status=rs1.getString(1);
if(status.equalsIgnoreCase("Not Downloaded")){
st2=con.createStatement();
ResultSet rs2;
rs2=st2.executeQuery("select filename from cloudupload where fileid='"+FileID+"'");
rs2.next();
String filename=rs2.getString(1);
File f3=new File("D:/Self Destructing/"+Username+"/Original Files/"+filename);
CloudManupulation.main();
CloudManupulation.delete(f3, "Self Destruction");
Statement ss;
ss=con.createStatement();
ss.executeUpdate("DELETE FROM cloudupload,filesharing USING cloudupload INNER JOIN filesharing WHERE cloudupload.fileid="+FileID+" and filesharing.fileid="+FileID);
SendMail obj= new SendMail(Username);
System.out.println("ABC");
}
}
catch(Exception e)
{
System.out.println("error");
}
}
}
}
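TimeHandler's self-destruction deadline rests on java.util.Timer's one-shot schedule. A minimal standalone sketch of that mechanism (illustrative class name; the delay is shortened from the project's seconds * 60000 ms for demonstration):

```java
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Minimal one-shot timer sketch mirroring TimeHandler's schedule call
// (illustrative; not part of the project's code).
public class TimerSketch {
    // Schedules a task once and reports whether it fired within 5 seconds.
    static boolean fireOnce(long delayMs) throws InterruptedException {
        CountDownLatch fired = new CountDownLatch(1);
        Timer timer = new Timer();
        timer.schedule(new TimerTask() {
            @Override public void run() {
                // In TimeHandler this is where the expired file is destroyed.
                fired.countDown();
            }
        }, delayMs);
        boolean ok = fired.await(5, TimeUnit.SECONDS);
        timer.cancel(); // release the timer thread
        return ok;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("task fired: " + fireOnce(100)); // prints "task fired: true"
    }
}
```

Because schedule with a single delay argument fires exactly once, each shared file gets one deletion deadline, after which the task checks the download status and purges the file and its database rows.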
APPENDIX 2
SCREEN SHOTS
Home
Fig.1 Home page
Sign Up
Fig.2 SignUp page
Table: Users
Table 1
Login
Fig.3 Login page
OTP
Fig.4 OTP Generation
Verify OTP
Fig.5 OTP Verification
Home
Fig.6 Admin Home page
Upload File
Fig.7 Uploading files
Table: Upload
Table 2
Upload to Cloud
Fig.8 Uploaded to Cloud
Private Cloud File
Fig.9 Private Cloud file
Table: Cloud upload
Table 3
Twice Encryption
Fig.10 Twice Encryption
File Share
Fig.11 File Sharing
Table: File sharing
Table 4
File Share Cloud
Fig.12 Share Cloud file
File Download
Fig.13 File Download
Delete File After File Download
Fig.14 File Deleted
With threat file share
Fig.15 File with threat
Download Threat File
Fig.16 Threat Download
Notify Threat File
Fig.17 Notify Threat
Wrong Key
Fig.18 Wrong key notification
Again Wrong Key
Fig.19 Again wrong key notification
File Deleted Unauthorized Access
Fig.20 Deleted unauthorized access
Time Expired
Fig.21 Deleted Time Expired files