
Suggestion

Paper Code: PCC-CS 601


Paper Name: Software Engineering

Group – A
(Multiple Choice Type Questions)

1. (i) What is the first step in the software development lifecycle?


a) System Design b) Coding c) System Testing d) Preliminary Investigation and
Analysis

(ii) What does RAD stand for?


a) Rapid Application Document b) Rapid Application Development c) Relative Application
Development d) None of the above

(iii) What is the major drawback of the Spiral Model?

a) Higher amount of risk analysis b) Doesn't work well for smaller projects c) Additional
functionalities are added later on d) Strong approval and documentation control

(iv) Which of the following uses empirically derived formulas to predict effort as a function
of LOC or FP?

a) FP-Based Estimation b) Process-Based Estimation c) COCOMO d) Both FP-Based Estimation and COCOMO
(v) Which of the following options is correct?

a) The prototyping model facilitates the reusability of components. b) RAD Model facilitates
reusability of components c) Both RAD & Prototyping Model facilitates reusability of
components d) None

(vi) Which one of the following activities is not recommended for software processes in
software engineering?

a) Software Evolution b) Software Verification c) Software Testing & Validation d) Software designing

(vii) On what basis is plan-driven development different from that of the software
development process?
a) Based on the iterations that occurred within the activities. b) Based on the output, which
is derived after negotiating in the software development process. c) Based on the interleaved
specification, design, testing, and implementation activities. d) All of the above

(viii) The main activity of the design phase of the system life cycle is to?

a) Replace the old system with the new one b) Develop and test the new system c)
Understand the current system d) Propose alternatives to the current system
(ix) A system analyst does not need to consider _____?

a) Technical feasibility b) Economic feasibility c) Operational feasibility d) None of these
(x) The system which can preserve and reproduce the knowledge of experts but has a limited application focus is:

a) Applications b) Expert system c) Benefits and limitations d) Knowledge base

Group – B
(Short Answer Type Questions)
Each of 5 marks

2. Which life cycle model would you follow for developing software for each of the
following applications? Justify your selection of model with the help of an appropriate
reason.
a) A Game
b) A Text editor (2 + 3)

a) A Game

Life Cycle Model: Iterative/Incremental Model (Agile)

Reason: Game development benefits from the iterative nature of Agile because it allows for
continuous feedback and improvement. Games often require frequent testing, user feedback,
and refinement of features and mechanics. An iterative approach enables developers to build
a playable version quickly, test it with users, and make adjustments based on the feedback,
ensuring that the final product is engaging and meets user expectations.

b) A Text Editor

Life Cycle Model: Waterfall Model

Reason: Developing a text editor can be more straightforward and less prone to frequent
changes in requirements compared to a game. The Waterfall model works well here as it
allows for a clear definition of requirements, followed by systematic design, implementation,
and testing phases. Once the features and functionalities are well-defined, the development
process can proceed in a structured manner, ensuring that all specified requirements are met
without the need for constant iteration.

3. What is SRS? Briefly explain the characteristics of a good SRS. (2 + 3)

Software Requirements Specification (SRS) is a comprehensive description of the intended purpose and environment for software under development. It documents the necessary requirements to be satisfied by the software system, providing a detailed outline of the functionalities, constraints, interfaces, and other aspects to guide the development process.

Characteristics of a Good SRS

A good SRS should have the following characteristics:

1. Correctness: The SRS should accurately describe the system to be built. It should
contain all necessary requirements agreed upon by all stakeholders.
2. Unambiguity: The requirements should be stated clearly without any ambiguity.
Each requirement should be interpreted in only one way.
3. Completeness: The SRS should include all significant requirements, including
responses to all possible inputs and conditions, coverage of all software
functionalities, and constraints.
4. Consistency: There should be no conflicting requirements in the SRS. Consistency
must be maintained across all sections of the document.
5. Verifiability: Each requirement should be stated in such a way that it can be verified
through testing, inspection, analysis, or demonstration.

4. a) Why is the intermediate COCOMO expected to give more accurate estimates than
the basic COCOMO?
The basic COCOMO model assumes that effort and development time are
functions of the product size alone. However, a host of other project parameters besides the
product size affect the effort required to develop the product as well as the development time.
Therefore, in order to obtain an accurate estimation of the effort and project duration, the
effect of all relevant parameters must be taken into account. The intermediate COCOMO
model recognizes this fact and refines the initial estimate obtained using the basic COCOMO
expressions by using a set of 15 cost drivers (multipliers) based on various attributes of
software development. That is why the intermediate COCOMO model is expected to give more accurate estimates than the basic COCOMO model.

b) Use a schematic diagram to show the order in COCOMO estimation technique for
i) cost
ii) effort
iii) duration
iv) size (3 + 2)
The COCOMO (Constructive Cost Model) estimation technique provides a framework for
estimating the cost, effort, duration, and size of a software project. Below is a schematic
diagram that outlines the order of estimation for these parameters in the COCOMO model.

COCOMO Estimation Technique Schematic Diagram


+----------------------+
|     Project Size     |   (e.g., KLOC - Thousand Lines of Code)
+----------------------+
           |
           v
+----------------------------+
|          Effort            |   (in person-months)
|   Effort = a * (Size)^b    |
+----------------------------+
           |
           v
+------------------------------+
|          Duration            |   (in months)
|  Duration = c * (Effort)^d   |
+------------------------------+
           |
           v
+--------------------------------------------+
|                   Cost                     |   (in monetary units)
|   Cost = Effort * Cost_per_person_month    |
+--------------------------------------------+

Explanation:

1. Project Size (Size)


o The size of the software project is estimated first, usually measured in KLOC
(Thousand Lines of Code).
2. Effort
o Based on the size, the effort required is calculated using the formula: Effort =
a * (Size)^b
▪ 'a' and 'b' are constants derived from historical project data and vary
based on the project type (organic, semi-detached, or embedded).
3. Duration
o The duration or schedule time is estimated next using the formula: Duration =
c * (Effort)^d
▪ 'c' and 'd' are also constants that depend on the type of project.
4. Cost
o Finally, the cost is estimated by multiplying the effort by the cost per person-
month: Cost = Effort * Cost_per_person_month
▪ The cost per person-month is determined based on organizational
standards and labor rates.

In this schematic, the estimation starts from the project size, moves on to effort, then
duration, and finally the cost. This order reflects the logical flow of the COCOMO model
where each parameter is dependent on the previous one.
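As an illustration, a small C sketch of this estimation chain is given below. The constants a, b, c, d are the standard basic COCOMO organic-mode values; the project size and the cost per person-month are assumed sample inputs, not figures from the text.

#include <stdio.h>
#include <math.h>

int main(void) {
    double size_kloc   = 32.0;     /* assumed project size in KLOC          */
    double a = 2.4, b = 1.05;      /* basic COCOMO organic-mode constants   */
    double c = 2.5, d = 0.38;
    double cost_per_pm = 5000.0;   /* assumed cost per person-month         */

    double effort   = a * pow(size_kloc, b);  /* person-months  */
    double duration = c * pow(effort, d);     /* months         */
    double cost     = effort * cost_per_pm;   /* monetary units */

    printf("Effort   = %.1f person-months\n", effort);
    printf("Duration = %.1f months\n", duration);
    printf("Cost     = %.1f units\n", cost);
    return 0;
}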
Group – C
(Long Answer Type Questions)
Each of 10 marks

5. a) A project of size 200 KLOC is to be developed. The software development team has average experience on similar types of projects. The project schedule is not very tight. Calculate the effort, development time, average staff size, and productivity of the project. (1 + 2 + 1 + 1)
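A worked solution sketch, assuming the basic COCOMO model in semi-detached mode (average team experience, schedule not tight) with the standard coefficients a = 3.0, b = 1.12, c = 2.5, d = 0.35:

Effort = a * (Size)^b = 3.0 * (200)^1.12 ≈ 1133 person-months

Development time = c * (Effort)^d = 2.5 * (1133)^0.35 ≈ 29.3 months

Average staff size = Effort / Development time = 1133 / 29.3 ≈ 39 persons

Productivity = Size / Effort = 200 KLOC / 1133 PM ≈ 0.176 KLOC per person-month (about 176 LOC per person-month)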

b) What are the different types of team structure followed in software projects? Discuss
them briefly. (5)

1. Hierarchical (Functional) Team Organization

• Description: This traditional structure is organized by function, with each team member reporting to a functional manager (e.g., development, testing, design).
• Advantages: Clear roles and responsibilities, specialized expertise, and a well-
defined chain of command.
• Disadvantages: Can lead to communication gaps between different functions and
slower decision-making.

2. Chief-Programmer Team Organization

• Description: A senior, highly skilled programmer (the chief programmer) leads the
team and makes critical decisions. Other team members assist by implementing the
chief programmer’s designs and instructions.
• Advantages: Strong leadership and clear decision-making, high-quality design and
implementation driven by an experienced expert.
• Disadvantages: Potential bottleneck if the chief programmer becomes a single point
of failure, can limit team members' growth and contribution.

3. Matrix Team Organization

• Description: Combines functional and project-based structures. Team members report to both a functional manager and a project manager.
• Advantages: Balances technical expertise and project-specific focus, encourages
better resource allocation.
• Disadvantages: Can create confusion and conflict due to dual reporting lines,
potential for resource contention.

4. Egoless Team Organization

• Description: All team members have equal status, and decisions are made
collectively. The focus is on collaboration and knowledge sharing rather than
hierarchy.
• Advantages: Promotes open communication, innovation, and shared responsibility,
leading to higher morale and creativity.
• Disadvantages: Potential for conflict and slower decision-making due to the lack of a
clear leader, may struggle with accountability.

5. Democratic Team Organization

• Description: Team members participate equally in decision-making processes, often using voting or consensus methods to make decisions.
• Advantages: Encourages active participation and buy-in from all team members,
fostering a sense of ownership and collaboration.
• Disadvantages: Decision-making can be slow and inefficient, potential for conflicts if
consensus is hard to achieve.

6. a) Explain why the spiral model is considered a meta-model.
b) Describe the role of a system analyst.
c) Explain the disadvantages of the prototype model. (2 + 3 + 5)

a) Why the Spiral Model is Considered a Meta Model

The Spiral Model is considered a meta-model because it integrates various aspects of different software development models, such as the Waterfall model, Incremental model, and
Prototyping model, into a comprehensive framework. It is characterized by its iterative
nature, which allows for continuous refinement through repeated cycles (or spirals) of
planning, risk analysis, engineering, and evaluation. Each cycle produces a more refined
version of the software, incorporating feedback and addressing risks progressively.

Key reasons why the Spiral Model is considered a meta-model include:


• Iterative Refinement: Combines the linearity of the Waterfall model with the
iterative nature of Incremental and Prototyping models.
• Risk Management: Emphasizes risk assessment and mitigation at each iteration,
which is a unique feature not explicitly handled in other models.
• Flexibility: Allows for the incorporation of various development practices and
techniques as needed at different stages of the project.
• Customizability: Can be adapted to suit the specific needs and complexities of
different projects, making it a framework that can encapsulate other models.

b) Role of a System Analyst

A system analyst plays a crucial role in the software development process. The primary
responsibilities include:

• Requirements Gathering: Work with stakeholders to identify and document the functional and non-functional requirements of the system.
• Feasibility Analysis: Assess the technical, operational, and economic feasibility of
proposed systems.
• System Design: Create detailed specifications and design documents that outline the
architecture and functionality of the system.
• Stakeholder Communication: Serve as a bridge between the technical team and non-
technical stakeholders, ensuring clear and effective communication.
• Problem Solving: Analyze and solve problems that arise during the development
process, ensuring that the system meets the specified requirements.
• Validation and Testing: Participate in the validation and testing of the system to
ensure it meets the required standards and performs as expected.
• Documentation: Maintain comprehensive documentation throughout the project
lifecycle to ensure clear understanding and future maintenance.

c) Disadvantages of the Prototype Model

The Prototype Model involves creating a preliminary version of the software to demonstrate
concepts and test functionalities before developing the final product. While it has several
benefits, it also comes with disadvantages:

• Inadequate Analysis: Focus on prototyping can sometimes lead to insufficient analysis of the complete system requirements, causing issues later in development.
• Misleading Expectations: Stakeholders might mistake the prototype for the final
product and expect all features to be implemented quickly, leading to unrealistic
expectations.
• Incomplete Documentation: The focus on rapid development and iterative
refinement may result in inadequate documentation, making future maintenance and
scaling difficult.
• High Costs: Developing multiple prototypes and iterations can be time-consuming
and costly, especially if the prototypes are discarded and rebuilt frequently.
• Scope Creep: Continuous feedback and changing requirements can lead to scope
creep, where new features are added beyond the initial plan, potentially delaying the
project.
• Quality Concerns: Quick development cycles might lead to compromises in code
quality, testing, and overall system robustness.
7. What is cost benefit analysis? What are the common techniques for cost benefit
analysis?
(5 + 5)

A cost-benefit analysis in project management is a tool to evaluate the costs vs. benefits of an
important project or business proposal. It is a practical, data-driven approach for guiding
organizations and managers in making solid investment decisions. It helps determine if a
project or investment is financially feasible and beneficial for the organization.

A formal CBA identifies and quantifies all project costs and benefits, then calculates the
expected return on investment (ROI), internal rate of return (IRR), net present value (NPV),
and payback period. The difference between the costs and the benefits of moving forward
with the project is then calculated.

In a CBA, costs may include the following:

• Direct costs: These are costs that are directly related to the proposed project or
investment, e.g., materials, labor, and equipment.
• Indirect costs: These are related fixed costs that contribute to bringing the project or
investment to life, e.g., overhead, administrative, or training expenses.
• Opportunity costs: These are the benefits or opportunities foregone when a business
chooses one project or opportunity over others. To quantify opportunity costs, you
must weigh the potential benefits of the available alternatives.
• Future costs: These are costs that may come up later in the project. These costs
depend on certain factors happening, e.g., costs of mitigating potential risks.
Cost-benefit analysis facilitates a structured cost management process, helping project managers and company executives prioritize projects and allocate resources effectively to achieve the organization’s main goals.

Benefits may include:

• Tangible benefits: These are measurable outcomes that can be easily quantified in
monetary terms, e.g., increased revenue or reduced costs.
• Intangible benefits: These benefits are difficult to measure in monetary terms. They
are indirect or qualitative outcomes, such as improved customer satisfaction or
increased employee morale.
Although intangible benefits may be difficult to quantify in financial terms, it is necessary to
factor them in when conducting a CBA, as they still have a significant impact on the overall
value of a project.

Some common techniques used in cost-benefit analysis are:

1. Net Present Value (NPV): This technique calculates the present value of all future
cash inflows and outflows associated with a project or decision, discounted at an
appropriate discount rate. A positive NPV indicates that the project or decision is
profitable and should be accepted.
2. Benefit-Cost Ratio (BCR): The BCR is calculated by dividing the present value of the
project's benefits by the present value of its costs. A BCR greater than 1 indicates that
the project or decision is economically viable and should be accepted.
3. Internal Rate of Return (IRR): The IRR is the discount rate that makes the NPV of a
project or decision equal to zero. If the IRR is greater than the required rate of return
or the cost of capital, the project or decision is considered acceptable.
4. Payback Period: This technique calculates the length of time it takes for the
cumulative cash inflows from a project or decision to equal the initial investment. A
shorter payback period is generally preferred, as it indicates a quicker return on
investment.
5. Break-Even Analysis: Break-even analysis determines the point at which total costs
equal total benefits, indicating no net loss or gain. It helps identify the minimum
performance required for a project to be viable.
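To make the first two techniques concrete, a small C sketch is given below. The cash flows, discount rate, and initial cost are assumed sample figures, not values taken from any particular project.

#include <stdio.h>
#include <math.h>

int main(void) {
    double rate  = 0.10;                           /* assumed discount rate        */
    double cost0 = 10000.0;                        /* initial investment (year 0)  */
    double benefits[] = {4000, 4000, 4000, 4000};  /* assumed yearly benefits      */
    int years = 4;

    /* Discount each year's benefit back to present value. */
    double pv_benefits = 0.0;
    for (int t = 1; t <= years; t++)
        pv_benefits += benefits[t - 1] / pow(1.0 + rate, t);

    double npv = pv_benefits - cost0;   /* Net Present Value          */
    double bcr = pv_benefits / cost0;   /* Benefit-Cost Ratio         */

    printf("PV of benefits = %.2f\n", pv_benefits);
    printf("NPV = %.2f (accept if greater than 0)\n", npv);
    printf("BCR = %.2f (accept if greater than 1)\n", bcr);
    return 0;
}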

Group – A
(Multiple Choice Type Questions)
Each of 1 marks

1. (i) What is the purpose of system analysis and design?

a) To develop software b) To improve system efficiency c) To provide training to employees d) To monitor system performance

(ii) What is the purpose of a data flow diagram?

a) To illustrate the system architecture b) To depict the system inputs and outputs c)
To depict the system data and relationships d) To illustrate the system processes
(iii) Which of the following is not a component of UML (Unified Modeling Language)?

a) Use case diagram b) Activity diagram c) Entity-relationship diagram d) Class diagram
(iv) What is the purpose of a use case diagram?

a) To depict the system architecture b) To depict the system inputs and outputs c) To depict the system data and relationships d) To illustrate the system functionality
(v) What is the purpose of a class diagram?

a) To depict the system inputs and outputs b) To depict the system processes
c) To depict the system data and relationships d) To illustrate the system architecture
(vi) What is the purpose of a data dictionary?

a) To provide a glossary of terms used in the system b) To provide a description of the system’s data structures c) To provide a list of all the system components d) To provide a description of the system’s processes
(vii) The worst type of coupling is

a) Data coupling b) Control coupling c) Stamp coupling d) Content coupling
(viii) ER model shows the

a) Static view b) Functional view c) Dynamic view d) All the above
(ix) The main purpose of integration testing is to find

a) design errors b) analysis errors c) procedure errors d) interface errors
(x) Which of the following are advantages of using LOC (lines of code) as a size oriented
metric?

a) LOC is easily computed b) LOC is a language dependent measure c) LOC is a language independent measure d) LOC can be computed before a design is completed
Group – B
(Short Answer Type Questions)
Each of 5 marks
2. a) Why is risk analysis important? b) What is the difference between ‘known risk’
and ‘predictable risk’? (2 + 3)

Risk analysis is crucial in software development for several reasons:

1. Identifying Potential Threats: It helps in identifying potential risks and uncertainties that could impact the success of the project, including technical, financial, and operational risks.
2. Prioritizing Risks: By analyzing and assessing risks, teams can prioritize them based
on their likelihood and potential impact, allowing for more effective risk management
strategies.
3. Mitigating Negative Impacts: Risk analysis enables teams to develop strategies to
mitigate negative impacts on project objectives, such as schedule delays, budget
overruns, or quality issues.
4. Increasing Stakeholder Confidence: Stakeholders have greater confidence in a
project when risks are identified, analyzed, and managed effectively. It fosters
transparency and trust within the project team and with external stakeholders.
5. Improving Decision Making: Risk analysis provides valuable insights that can
inform decision-making throughout the project lifecycle, helping teams make
informed choices and adapt to changing circumstances.
3. a) Write down three advantages of decision trees over decision tables.
b) Mention two situations when decision tables work best (3 + 2)

a) Advantages of Decision Trees over Decision Tables:

1. Ease of Visualization:
o Decision trees provide a graphical representation of decision-making
processes, making them easier to understand and visualize compared to
decision tables, which are typically presented in tabular form. This visual
clarity can aid in communication and interpretation of complex decision logic.
2. Handling Continuous Variables:
o Decision trees can handle both categorical and continuous variables naturally,
allowing for more flexible modeling of decision-making scenarios. Decision
tables, on the other hand, are more suited for discrete, categorical inputs and
outputs, and may require additional processing for continuous variables.
3. Ability to Capture Non-Linear Relationships:
o Decision trees are capable of capturing non-linear relationships between input
variables and outcomes through recursive partitioning of the data space. This
enables them to model more complex decision boundaries compared to
decision tables, which may struggle to represent non-linear relationships
effectively.

b) Situation When Decision Tables Work Best:

1. Simple Decision-Making Processes:


o Decision tables are well-suited for representing simple decision-making
processes with a small number of inputs and outputs. When the decision logic
is straightforward and can be easily represented in tabular form, decision
tables offer a concise and organized way to document decision rules.
2. Discrete and Categorical Inputs:
o Decision tables are most effective when dealing with discrete, categorical
inputs and outputs. They excel at representing decision rules based on specific
conditions and outcomes, making them suitable for domains where inputs can
be clearly categorized into distinct classes or states.
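For instance, a simple decision table for a login check (an assumed illustration, not part of the original answer) shows how well the tabular form fits discrete conditions:

Conditions              Rule 1   Rule 2   Rule 3   Rule 4
Valid user ID?            Y        Y        N        N
Correct password?         Y        N        Y        N
Actions
Grant access              X
Show error message                 X        X        X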
4. Distinguish between physical DFD and logical DFD with an example each

Physical Data Flow Diagram (DFD):

A physical DFD depicts how data flows through a system at the implementation level,
showing the actual processes, data stores, and external entities involved in the system. It
represents the system as it will be implemented, including hardware components, software
modules, and the physical flow of data between them.

Example of a Physical DFD: Consider an online shopping system. In a physical DFD, you
might represent the actual servers, databases, and network connections involved in the
system. For instance, you could illustrate the process of a user placing an order by showing
the interaction between the web server, the database server storing product information, and
the payment gateway.

Logical Data Flow Diagram (DFD):

A logical DFD focuses on the functional aspects of a system without considering the
implementation details. It abstractly represents the system's processes, data flows, data stores,
and external entities, emphasizing the flow of information and the logical relationships
between components.

Example of a Logical DFD: Continuing with the online shopping system example, a logical
DFD might depict the high-level processes involved in the system, such as "Manage
Inventory," "Process Orders," and "Handle Payments." It would illustrate how data flows
between these processes, including inputs from external entities like customers and outputs to
fulfillment centers and payment processors. The logical DFD would not specify the specific
servers or databases involved but would focus on the functional flow of information within
the system.

Group – C
(Long Answer Type Questions)
Each of 10 marks

5. a) What are the differences between fault, failure and error?


b) Write a C function to find the maximum of three integer numbers. Now draw the
control flow graph for that C function. Also find its cyclomatic complexity using
possible methods.
3 + (2 + 2 +3)
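A possible solution sketch for part (b) is given below; the question does not fix the exact function, so this simple form is assumed. The comments number the nodes used in the control flow graph.

#include <stdio.h>

int max_of_three(int a, int b, int c)
{
    int max = a;           /* node 1 */
    if (b > max)           /* node 2: decision */
        max = b;           /* node 3 */
    if (c > max)           /* node 4: decision */
        max = c;           /* node 5 */
    return max;            /* node 6 */
}

int main(void)
{
    printf("%d\n", max_of_three(3, 9, 5));   /* prints 9 */
    return 0;
}

Control flow graph edges: 1->2, 2->3 (true), 2->4 (false), 3->4, 4->5 (true), 4->6 (false), 5->6, giving N = 6 nodes and E = 7 edges. Cyclomatic complexity: V(G) = E - N + 2 = 7 - 6 + 2 = 3, or equivalently V(G) = number of decisions + 1 = 2 + 1 = 3.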
6. a) Explain when you use PERT charts and when you use Gantt charts while you are
performing the duties of a project manager.
b) What are ‘baselines’ in relation to Software Configuration Management?
c) What do you mean by CASE? (4 + 4 + 2)

a) When to Use PERT Charts and Gantt Charts as a Project Manager:

• PERT Charts (Program Evaluation and Review Technique):


o PERT charts are useful when dealing with projects that have a high degree of
uncertainty and complexity, such as research and development projects or
large-scale construction projects.
o As a project manager, you would use PERT charts to visualize the critical
path, identify key dependencies, and estimate the duration of tasks based on
optimistic, pessimistic, and most likely time estimates.
o PERT charts help in managing project schedules by identifying potential
bottlenecks and critical activities that could impact project completion.
• Gantt Charts:
o Gantt charts are beneficial for projects with well-defined tasks, clear
dependencies, and known durations, such as software development projects or
manufacturing processes.
o As a project manager, you would use Gantt charts to create a visual timeline of
project activities, allocate resources, and track progress against milestones and
deadlines.
o Gantt charts provide a clear overview of project schedules, allowing you to
manage resources efficiently, identify potential delays, and communicate
project timelines to stakeholders.

In summary, use PERT charts when managing projects with high uncertainty and complexity,
and use Gantt charts for projects with well-defined tasks and known durations.

b) 'Baselines' in Relation to Software Configuration Management:

In software configuration management (SCM), a baseline refers to a snapshot of the software configuration at a specific point in time. It represents a stable and approved version of the software, typically used as a reference for future changes and comparisons. Baselines serve several important purposes in SCM:

• Version Control: Baselines provide a reference point for version control, allowing
developers to track changes made to the software over time and revert to previous
versions if necessary.
• Quality Assurance: Baselines help ensure the quality and consistency of the software
by defining a standard configuration that has undergone testing and validation.
• Change Management: Baselines facilitate change management by establishing a
clear starting point for new development efforts and documenting the state of the
software at key milestones.
• Configuration Management: Baselines are used to manage and control
configuration items (CIs), such as source code, documentation, and executable files,
throughout the software development lifecycle.

Overall, baselines play a critical role in ensuring the integrity, reliability, and traceability of
software configurations in SCM.

c) CASE (Computer-Aided Software Engineering):

CASE refers to the use of computer-based tools and techniques to support various activities
in the software development process, including analysis, design, coding, testing, and
maintenance. CASE tools automate repetitive tasks, provide visual modeling capabilities, and
facilitate collaboration among team members. Some common features of CASE tools include:
• Requirements management
• Diagramming and modeling
• Code generation
• Version control
• Testing and debugging
• Documentation generation

7. a) What are the different levels of testing and their goals?


b) ‘Software does not wear out, but hardware does’. Explain.
c) What problems are likely to occur if a module has low cohesion? (4 + 4 + 2)

a) Different Levels of Testing and Their Goals:

1. Unit Testing:
o Goal: To verify the correctness of individual units or components of the
software, typically at the code level.
o Focus: Identifying defects in code logic, ensuring that each unit functions as
intended, and validating the behavior of individual functions or methods.
2. Integration Testing:
o Goal: To test the interaction between different units or modules when
combined together.
o Focus: Detecting defects in the interfaces and interactions between modules,
ensuring that data flows correctly between components, and validating the
integration of units within the larger system.
3. System Testing:
o Goal: To evaluate the behavior of the entire system as a whole, including its
functionality, performance, and reliability.
o Focus: Verifying that the system meets its specified requirements, validating
its overall functionality from an end-to-end perspective, and identifying any
defects that arise when the system is used in a realistic environment.
4. Acceptance Testing:
o Goal: To determine whether the system satisfies the acceptance criteria and is
ready for deployment to the end-users.
o Focus: Validating that the system meets the user's needs, ensuring that it
aligns with business requirements, and gaining approval from stakeholders for
deployment.

b) ‘Software does not wear out, but hardware does’. Explain.

Software does not wear out in the same way that physical hardware does because software is
not subject to the same types of physical degradation over time. The key differences between
software and hardware in terms of wear and tear are:

• Physical Nature: Hardware components are physical objects made of materials that
degrade over time due to factors such as friction, heat, and exposure to environmental
conditions. In contrast, software consists of digital instructions stored electronically,
which do not degrade physically.
• Maintenance and Updates: Software can be updated, maintained, and patched to fix
bugs, add new features, or improve performance without degradation. On the other
hand, hardware components may need to be replaced entirely if they become worn out
or obsolete.
• Endurance and Lifespan: Hardware components have a limited lifespan determined
by their material properties and usage, and they degrade over time with normal use. In
contrast, software can theoretically last indefinitely if properly maintained and
updated.
• Obsolescence: Hardware components become obsolete as technology advances and
newer, more efficient components are developed. Software, while also subject to
obsolescence as new versions and technologies emerge, can often be updated or
adapted to work with newer hardware.

c) Problems Likely to Occur if a Module Has Low Cohesion:

Low cohesion in a module means that the elements within the module are loosely related and
do not contribute to a single, well-defined purpose. This can lead to several problems:

1. Difficulty in Understanding and Maintenance: Modules with low cohesion are harder to understand and maintain because their functionality is spread across multiple unrelated tasks. Developers may struggle to identify the purpose of the module and make changes without unintended side effects.
2. Increased Complexity: Low cohesion results in increased complexity within the
module, making it more error-prone and harder to test. Complex modules are difficult
to debug and may hide subtle bugs that are challenging to detect.
3. Dependency Issues: Low cohesion often leads to tight coupling between modules,
where changes in one module require modifications in others. This increases the risk
of ripple effects throughout the system and makes it harder to implement changes
without impacting other parts of the software.
4. Reduced Reusability: Modules with low cohesion are less reusable because their
functionality is specific to a narrow set of tasks. Developers are less likely to be able
to extract and reuse components from a low-cohesion module in other parts of the
system or in future projects.

Group – A
(Multiple Choice Type Questions)
Each of 1 marks

1. (i) Aggregation represents

a) is_a relationship b) part_of relationship c) composed_of relationship d) none of the above

(ii) If P is risk probability, L is loss, then Risk Exposure (RE) is computed as


a) RE = P/L b) RE = P + L c) RE = P*L d) RE = 2*P*L

(iii) A fault simulation testing technique is

a) Mutation testing b) Stress testing c) Black box testing d) White box testing

(iv) Which of the following statements is true

a) Abstract data types are the same as classes b) Abstract data types do not allow inheritance c) Classes cannot inherit from the same base class d) Objects have state and behavior

(v) In the spiral model ‘risk analysis’ is performed

a) In the first loop b) in the first and second loop c) In every loop
d) before using spiral model

(vi) Each time a defect gets detected and fixed, the reliability of a software product

a) increases b) decreases c) remains constant d) cannot say anything

(vii) In function point analysis, number of general system characteristics used to rate the system are

a) 10 b) 14 c) 20 d) 12
(viii) Requirements can be refined using

a) The waterfall model b) prototyping model c) the evolutionary model d) the spiral model

(ix) Software consists of

a) Set of instructions + operating procedures b) Programs + documentation + operating procedures c) Programs + hardware manuals d) Set of programs

(x) FAST stands for

a) Functional Application Specification Technique b) Fast Application Specification Technique c) Facilitated Application Specification Technique d) None of the above

Group – B
(Short Answer Type Questions)
Each of 5 marks
2. Explain about software quality assurance

Software Quality Assurance (SQA) is a systematic process designed to ensure that software
products and processes meet defined quality standards and perform as expected. SQA
encompasses a variety of activities throughout the software development lifecycle, including
process monitoring, standards compliance, and testing. The primary goal of SQA is to
identify and address defects early in the development process, thereby improving the
reliability, performance, and usability of the final product.

SQA involves both preventive and corrective measures. Preventive measures focus on
improving the development process to reduce the likelihood of defects, such as implementing
coding standards, conducting code reviews, and using automated testing tools. Corrective
measures involve detecting and fixing defects through activities like debugging and user
acceptance testing.

By integrating SQA practices, organizations can minimize risks, reduce costs associated with
post-release defects, and ensure that the software meets both functional and non-functional
requirements, ultimately leading to higher customer satisfaction.

3. What are software validation and verification?

Software validation and verification are two critical components of the software quality
assurance process that ensure a software product meets its requirements and specifications.

• Verification: Verification is the process of evaluating whether the software conforms to its specified requirements and design. It is a static process that involves activities
such as reviews, inspections, and walkthroughs of documents, code, and models. The
primary goal of verification is to ensure that the software is being built correctly
according to the design specifications and standards.
• Validation: Validation is the process of evaluating the final software product to
ensure it meets the user's needs and requirements. It is a dynamic process that
involves executing the software and conducting various types of testing, such as
system testing, integration testing, and user acceptance testing. The primary goal of
validation is to ensure that the right product is being built and that it performs in real-
world conditions as expected.

Together, verification and validation help ensure that the software is both built correctly and
fulfills its intended purpose, leading to higher quality and more reliable software products.

4. What are white box and black box testing?

White box testing and black box testing are two fundamental approaches to software testing,
each with distinct techniques and focuses.

• White Box Testing: White box testing, also known as clear box or structural testing,
involves testing the internal structures or workings of an application. Testers have
access to the source code and use their knowledge of the code structure, algorithms,
and logic to design test cases. Techniques include statement coverage, branch
coverage, path coverage, and unit testing. The goal is to ensure that the internal
operations are performing as expected and to identify any hidden errors or security
vulnerabilities. This type of testing is typically performed by developers or testers
with programming knowledge.
• Black Box Testing: Black box testing, also known as behavioral or functional testing,
focuses on testing the software's functionality without any knowledge of the internal
code structure. Testers evaluate the software based on the inputs provided and the
outputs produced, ensuring it behaves according to the specified requirements.
Techniques include equivalence partitioning, boundary value analysis, decision table
testing, and use case testing. The goal is to validate the external behavior of the
software, making it suitable for end-user acceptance testing. This type of testing is
usually performed by quality assurance professionals or end-users who interact with
the software as a black box.

Both approaches are essential for comprehensive software testing, ensuring both internal code
integrity and external functionality.
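As a small illustration (an assumed example, not part of the original answer), the same C function can be exercised both ways: black-box cases are chosen from the specification's boundaries without looking at the code, while white-box cases are chosen to cover each branch.

#include <assert.h>
#include <stdio.h>

/* Assumed specification: return 1 if marks in 0-100 are a pass (>= 40), else 0. */
int is_pass(int marks) {
    if (marks >= 40)
        return 1;
    return 0;
}

int main(void) {
    /* Black-box view (boundary value analysis on the specification only) */
    assert(is_pass(0) == 0);
    assert(is_pass(39) == 0);
    assert(is_pass(40) == 1);
    assert(is_pass(100) == 1);

    /* White-box view (branch coverage: one input per branch of the if) */
    assert(is_pass(55) == 1);   /* true branch  */
    assert(is_pass(10) == 0);   /* false branch */

    printf("All test cases passed.\n");
    return 0;
}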

Group – C
(Long Answer Type Questions)
Each of 10 marks

5. a) Distinguish between a Data Flow Diagram and a flow chart.

b) What is meant by stub? What is a driver? In which testing are they required?
Explain briefly. (4 + 6)

Stub:

A stub is a piece of code used in software testing to simulate the behavior of a module or
component that a module under test depends on. Stubs are typically used when the actual
module is not yet developed, unavailable, or impractical to use during testing. The stub
provides the necessary responses to the calls made by the module under test, allowing the test
to proceed without the actual dependent module.

Example: If module A calls module B, and module B is not yet implemented, a stub for
module B would be created to return pre-defined responses to module A's calls, enabling the
testing of module A independently.

Driver:

A driver is a piece of code used in software testing to simulate the behavior of a module that
interacts with the module under test from a higher level. Drivers are typically used when the
module under test is a lower-level module and the higher-level controlling modules are not
yet developed or available. The driver calls the functions of the module under test and
provides the necessary input data.

Example: If module A depends on module B, and module A is not yet implemented, a driver
for module A would be created to call module B's functions with the required input data,
enabling the testing of module B independently.

Testing Context:

Stubs and drivers are primarily required in Integration Testing, specifically in Incremental
Integration Testing approaches such as Top-Down and Bottom-Up integration testing.

• Top-Down Integration Testing:


o Stubs are used to simulate the lower-level modules that are called by the
higher-level modules being tested.
o For example, if a top-level module is tested and it calls lower-level modules
that are not yet developed, stubs will be used to mimic those lower-level
modules.
• Bottom-Up Integration Testing:
o Drivers are used to simulate the higher-level modules that invoke the lower-
level modules being tested.
o For example, if a lower-level module is ready for testing but its higher-level
modules are not, drivers will be used to call the lower-level modules and
provide input.
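A minimal C sketch of these ideas is shown below; the module and function names are assumed purely for illustration.

#include <stdio.h>

/* Module under test (lower-level module). */
double compute_discount(double amount) {
    return (amount > 1000.0) ? amount * 0.10 : 0.0;
}

/* Stub: stands in for a not-yet-implemented billing module that the caller
   would normally use; it simply returns a canned value. */
double get_invoice_total_stub(int customer_id) {
    (void)customer_id;               /* input ignored; fixed response */
    return 1500.0;
}

/* Driver: stands in for the missing higher-level module; it calls the
   module under test with chosen inputs and reports the results. */
int main(void) {
    double total = get_invoice_total_stub(42);
    printf("Discount on %.2f = %.2f\n", total, compute_discount(total));
    printf("Discount on %.2f = %.2f\n", 500.0, compute_discount(500.0));
    return 0;
}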

6. Consider Roxy Roll center, a restaurant near College Street, Kolkata, owned by
Saurav. Some are convinced that its Egg-Chicken Rolls are the best in College Street.
Many people, especially Presidency University students and faculties, frequently eat at
Roxy. The restaurant uses an information system that takes customer orders, sends the
orders to the kitchen, monitors goods sold and inventory and generates reports for
management.
Draw the context diagram and Level 1 DFD for the Roxy’s food ordering system. Also
draw a level 2 DFD that will show the decomposition of any one process from level 1
DFD.
(3 + 4 + 3)
7. Write short notes on
i) UML diagrams
ii) Integration testing and load testing

i) UML diagrams

Unified Modeling Language (UML) diagrams are a set of graphical notations used to create
abstract models of software systems. UML is a standardized modeling language that helps in
visualizing, specifying, constructing, and documenting the artifacts of software systems.
UML diagrams can be broadly categorized into two types: structural diagrams and behavioral
diagrams.

• Structural Diagrams: These diagrams represent the static aspects of the system.
o Class Diagram: Shows the classes in the system, their attributes, operations,
and the relationships between the classes.
o Object Diagram: Represents a snapshot of the objects in the system and their
relationships at a specific point in time.
o Component Diagram: Depicts the components of the system and how they
are wired together.
o Deployment Diagram: Illustrates the physical deployment of artifacts on
nodes.
o Package Diagram: Organizes classes into packages, showing dependencies
between packages.
• Behavioral Diagrams: These diagrams represent the dynamic aspects of the system.
o Use Case Diagram: Represents the functionality of the system from a user
perspective, showing actors and use cases.
o Sequence Diagram: Shows object interactions arranged in time sequence,
highlighting how objects communicate.
o Activity Diagram: Illustrates the workflow or activities within the system.
o State Diagram: Describes the states an object goes through and the transitions
between these states.
o Collaboration Diagram: Focuses on the structural organization of objects
that send and receive messages.

ii) Integration Testing and Load Testing:

• Integration Testing: Integration testing is a level of software testing where individual units or components are combined and tested as a group. The primary goal is to
identify defects in the interaction between integrated components. Integration testing
can be performed using various approaches:
o Top-Down Integration Testing: Testing starts from the top of the module
hierarchy and progresses downward, using stubs to simulate lower-level
modules.
o Bottom-Up Integration Testing: Testing begins with the lower-level modules
and progresses upward, using drivers to simulate higher-level modules.
o Sandwich (Hybrid) Testing: A combination of top-down and bottom-up
approaches, testing is conducted both upward and downward simultaneously.
o Big-Bang Integration Testing: All components are integrated
simultaneously, and the entire system is tested as a whole, which can make it
challenging to isolate defects. Integration testing ensures that different parts of
the application work together correctly and helps in identifying interface-
related issues early in the development cycle.
• Load Testing: Load testing is a type of non-functional testing that evaluates how a
system performs under a specific expected load. The primary objective is to identify
performance bottlenecks and determine whether the application can handle the
anticipated number of concurrent users or transactions.
o Key Metrics in Load Testing: Response time, throughput, resource
utilization (CPU, memory, disk, network), and error rates.
o Approach: Simulate a variety of load conditions (normal, peak, and stress) to
understand the system's behavior and capacity limits. Load testing helps
ensure that the application can maintain acceptable performance levels under
real-world usage conditions, providing insights into scalability and reliability.
It is crucial for applications expected to serve a large number of users, such as
web applications, e-commerce platforms, and online services.

Group – A
(Multiple Choice Type Questions)
Each of 1 marks

1. (i) The feature of the object oriented paradigm which helps code reuse is

a) object b) class c) inheritance d) aggregation

(ii) All activities lying on critical path have slack time equal to

a) 0 b) 1 c) 2 d) None of above
(iii) Alpha and Beta Testing are forms of

a) Acceptance testing b) Integration testing c) System Testing d) Unit testing
(iv) An object encapsulates

a) Data b) Behaviour c) State d) Both Data and behaviour
(v) Site for Alpha Testing is

a) Software Company b) Installation place c) Anywhere d) None of the above
(vi) Which is not a size metric?

a) LOC b) Function count c) Program length d) Cyclomatic complexity
(vii) As the reliability increases, failure intensity
a) decreases b) increases c) no effect d) none of the above
(viii) Which of these terms is a level name in the Capability Maturity Model?

a) Ad hoc b) Repeatable c) Reusable d) Organized
(ix) FP-based estimation techniques require problem decomposition based on

a) information domain values b) project schedule c) software functions d) process activities
(x) If the objects focus on the problem domain, then we are concerned with

a) Object Oriented Analysis b) Object Oriented Design c) Object Oriented Analysis & Design d) None of the above

Group – B
(Short Answer Type Questions)
Each of 5 marks

2. What are the factors affecting coupling? What is relationship between coupling and
cohesion? (2 + 3)

Factors affecting coupling include:

1. Type of Interaction: The nature of the interaction between modules, such as data
coupling, control coupling, or content coupling, impacts the level of coupling. Data
coupling, where only data is shared, is preferred over control or content coupling.
2. Interface Complexity: The complexity of the module interfaces, including the
number and types of parameters, can increase or decrease coupling. Simpler interfaces
usually result in lower coupling.
3. Module Communication: The method of communication between modules, whether
direct calls, shared data, or message passing, influences coupling. Direct calls and
shared data tend to increase coupling.
4. Change Propagation: The likelihood that changes in one module will necessitate
changes in another module is a significant factor. High change propagation indicates
higher coupling.

Relationship:

• Inverse Relationship: Generally, there is an inverse relationship between coupling and cohesion. As cohesion increases, coupling tends to decrease, and vice versa. High
cohesion within a module typically reduces the need for the module to interact
frequently with other modules, leading to lower coupling.
• Design Quality: For a well-designed software system, the goal is to achieve high
cohesion and low coupling. High cohesion ensures that modules are focused and
manageable, while low coupling ensures that modules can be modified or replaced
with minimal impact on other modules.
• Maintainability and Reusability: Systems with high cohesion and low coupling are
more maintainable and reusable. High cohesion facilitates understanding and
managing the module's functionality, whereas low coupling allows modules to be
reused in different contexts without requiring significant changes.
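A small C sketch (an assumed example) contrasting data coupling through parameters with tighter coupling through shared global state:

#include <stdio.h>

/* Data coupling: the function receives exactly the data it needs as parameters. */
double area_data_coupled(double width, double height) {
    return width * height;
}

/* Common (global) coupling: the function silently depends on shared globals,
   so a change made anywhere else can alter its result. */
double g_width, g_height;
double area_globally_coupled(void) {
    return g_width * g_height;
}

int main(void) {
    printf("%.1f\n", area_data_coupled(3.0, 4.0));
    g_width = 3.0;
    g_height = 4.0;
    printf("%.1f\n", area_globally_coupled());
    return 0;
}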

3. What is formal technical review (FTR)? What are the differences between fault,
failure and error? (2 + 3)

Formal Technical Review (FTR):

A Formal Technical Review (FTR) is a structured and organized process in which a software
product or its components are examined by a team of reviewers to identify defects and ensure
adherence to standards and requirements. The primary objectives of an FTR are to improve
software quality, verify that the software meets its requirements, and ensure that the
development process is being followed correctly.

4. a) Explain the term “blocking state”


b) Explain the format of data dictionary (2 + 3)

a) Blocking State:
In the context of operating systems and multithreading, the term "blocking state" refers to a
situation where a process or thread is unable to continue execution until a specific event or
condition is met. This state occurs when a process requests a resource that is not currently
available or waits for an event that has not yet occurred.

Key Points:

• Waiting for Resources: A process may enter the blocking state when it needs to
access a resource, such as I/O devices, files, or network connections, which are
currently in use or unavailable.
• Synchronization: In multithreading, a thread may block while waiting for a
synchronization primitive, such as a mutex, semaphore, or condition variable, to be
released.
• Event Waiting: Processes or threads can block while waiting for specific events, such
as user input, signals, or inter-process communication messages.
• State Transition: When the required resource becomes available or the awaited event
occurs, the process or thread transitions from the blocking state to the ready state,
where it can resume execution.

b) Format of Data Dictionary:

A data dictionary is a structured repository of information about data elements in a system. It provides detailed descriptions and metadata for data items, facilitating consistent data usage
and understanding across the system. The format of a data dictionary typically includes the
following elements:

1. Data Element Name: The unique identifier or name of the data item.
2. Data Type: The type of data (e.g., integer, float, string, date).
3. Description: A brief explanation of the data element and its purpose.
4. Length/Size: The size or length of the data element (e.g., maximum number of
characters for a string).
5. Default Value: The initial value assigned to the data element if no other value is
provided.
6. Constraints: Any rules or restrictions on the data element, such as range limits,
allowed values, or validation criteria.
7. Source: The origin of the data element, such as the source system or data entry point.
8. Relationships: Information about how the data element relates to other data elements,
including foreign key references and dependencies.
9. Owner: The person or role responsible for the data element.
10. Usage: Details on how and where the data element is used within the system.
11. Example Values: Sample data values for illustration and better understanding.
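A sample entry following this format (the data element and its values are assumed for illustration only):

Data Element Name: customer_email
Data Type: String
Description: E-mail address used to contact the customer
Length/Size: Up to 254 characters
Default Value: None
Constraints: Must contain '@' and be unique per customer
Source: Customer registration form
Relationships: Referenced by the order records of that customer
Owner: Customer data steward
Usage: Order confirmations and customer communication
Example Values: alice@example.com, bob@example.org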

Group – C
(Long Answer Type Questions)
Each of 10 marks

5. a) What do you understand by software reliability?
b) Define the following terms: MTTF, MTTBR, ROCOF. (4 + 6)

a) Software Reliability

Software reliability refers to the probability that a software system will function without
failure under specified conditions for a given period of time. It is a critical aspect of software
quality and reflects the dependability of the software. High reliability indicates that the
software is less likely to fail and can be trusted to perform its intended functions accurately
and consistently. Software reliability is influenced by factors such as the complexity of the
code, the quality of the design, the thoroughness of testing, and the effectiveness of error
detection and correction mechanisms.

Key Aspects of Software Reliability:

• Error-Free Operation: The ability of the software to perform without encountering errors or failures.
• Performance Consistency: Consistent performance over time and across different
environments and conditions.
• User Confidence: Building user trust by ensuring the software meets expectations
and requirements reliably.
• Measurement: Often measured using metrics such as Mean Time To Failure
(MTTF), Mean Time Between Failures (MTBF), and Rate of Occurrence of Failures
(ROCOF).

b) Definitions of MTTF, MTTBR, and ROCOF

1. MTTF (Mean Time To Failure):


o Definition: MTTF is the average time a software system or component
operates before it experiences a failure. It is a measure of reliability and is
often used for systems that are not repaired after failure (non-repairable
systems).
o Formula: MTTF = Total operational time / Number of failures
o Example: If a software system operates for 1000 hours and experiences 2
failures during that period, the MTTF would be 500 hours.
2. MTTBR (Mean Time To Repair):
o Definition: MTTBR (also commonly referred to as MTTR - Mean Time To
Repair) is the average time taken to repair a software system or component
after a failure has occurred. It measures the maintainability of the software and
indicates how quickly a system can be restored to operational status.
o Formula: MTTR = Total downtime / Number of repairs
o Example: If it takes a total of 10 hours to repair a system that fails 5 times, the
MTTBR would be 2 hours.
3. ROCOF (Rate of Occurrence of Failures):
o Definition: ROCOF, also known as the failure intensity, is the frequency with
which failures occur in a software system over a specified time period. It is
typically measured in failures per unit of time (e.g., failures per hour).
o Formula: ROCOF = Number of failures / Total operational time
o Example: If a system experiences 3 failures over 100 hours of operation, the
ROCOF would be 0.03 failures per hour.
6. Design a 'White Box' test suite for the following code:

int gcd(int x, int y)
{
    while (x != y)
    {
        if (x > y)
            x = x - y;      /* subtract the smaller value from the larger */
        else
            y = y - x;
    }
    return x;               /* x == y here; this is the GCD */
}
The suite should include control flow graph, independent paths,
cyclomatic complexity (using two different techniques). Define
cyclomatic complexity. (8 + 2)
Cyclomatic Complexity is a software metric used to measure the complexity of a program's
control flow. It quantifies the number of linearly independent paths through a program's
source code. In other words, it represents the number of decision points or branches within
the code, indicating the number of possible paths that can be taken during program execution.

The cyclomatic complexity of a program is calculated using the control flow graph, where
nodes represent individual statements or decision points, and edges represent the flow of
control between these statements. The formula to compute cyclomatic complexity is:

V(G) = E - N + 2P

Where:

• V(G) is the cyclomatic complexity of the program.
• E is the number of edges in the control flow graph.
• N is the number of nodes in the control flow graph.
• P is the number of connected components of the graph (P = 1 for a single program).
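Applying this to the gcd function above (the node numbering is one reasonable choice; any equivalent graph gives the same result): the control flow graph has nodes 1 (while test), 2 (if test), 3 (x = x - y), 4 (y = y - x), and 5 (return x), with edges 1->2, 1->5, 2->3, 2->4, 3->1, and 4->1, so E = 6 and N = 5.

Technique 1 (edges and nodes): V(G) = E - N + 2P = 6 - 5 + 2 = 3.
Technique 2 (decision points): V(G) = number of decisions + 1 = 2 + 1 = 3.

Three linearly independent paths can therefore be chosen, for example: 1-5 (loop not entered), 1-2-3-1-5, and 1-2-4-1-5.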

7.

Group – A
(Multiple Choice Type Questions)
Each of 1 marks

1. (i) Changes made to an information system to add the desired but not necessarily the
required features is called

a) Preventative maintenance b) Adaptive maintenance c) Corrective maintenance d) Perfective maintenance

(ii) All the modules of the system are integrated and tested as complete system in the case of

a) Bottom up testing b) Top-down testing c) Sandwich testing d) Big-Bang testing
(iii) The tools that support different stages of software development life cycle are called:

a) CASE Tools b) CAME tools c) CAQE tools d) CARE tools
(iv) Structured charts are a product of

a) requirements gathering b) requirements analysis c) design d) coding
(v) The problem that threatens the success of a project but which has not yet happened is
a) bug b) error c) risk d) failure
(vi) Pseudocode can replace

a) flowcharts b) structure charts c) decision tables d) cause-effect graphs
(vii) If a program in its functioning has not met user requirements is some way, then it is

a) an error b) a failure c) a fault d) a defect
(viii) Which is not a step of requirement engineering?

a) Requirements elicitation b) Requirements analysis c)


Requirements design d) Requirements documentation
(ix) An object encapsulates

a) Data b) Behaviour c) State d) Both


Data and behaviour
(x) The problem that threatens the success of a project but which has not yet happened is a

a) bug b) error c) risk d) failure

Group – B
(Short Answer Type Questions)
Each of 5 marks

2. What are the major components of SRS?

Major Components of SRS (Software Requirements Specification):

• Introduction: Provides an overview of the document, its purpose, scope, and
references.
• Overall Description: Describes the general factors affecting the product and its
requirements, including product perspective, functions, user characteristics, and
operating environment.
• Specific Requirements: Details the functional and non-functional requirements of
the software, including system features, external interfaces, performance
requirements, and quality attributes.
• External Interface Requirements: Specifies the requirements for interfaces with
other systems or components, including hardware, software, and communication
protocols.
• System Features: Describes the functional capabilities and characteristics of the
software, typically organized into a hierarchical structure or use case model.
• Non-Functional Requirements: Includes constraints, quality attributes, and
performance requirements such as reliability, security, usability, and scalability.
• Appendices: Supplementary information, such as glossary, references, and
supporting documentation.
3. What are the different methods of information elicitation?

Different Methods of Information Elicitation:

• Interviews: Direct communication with stakeholders to gather information about
their needs, expectations, and requirements.
• Questionnaires/Surveys: Written surveys distributed to stakeholders to collect
structured responses and feedback on specific topics or requirements.
• Workshops/Focus Groups: Group sessions involving stakeholders to brainstorm
ideas, discuss requirements, and resolve conflicts collaboratively.
• Observation: Observing users or stakeholders in their natural environment to
understand their workflows, behaviors, and challenges.
• Prototyping: Building prototypes or mockups of the software to elicit feedback and
refine requirements based on user interactions and experiences.
• Document Analysis: Reviewing existing documentation, reports, specifications, and
other artifacts to extract relevant information about the system or domain.
• Scenario-Based Techniques: Creating hypothetical scenarios or use cases to explore
different user interactions and requirements in specific contexts.

4. What are the metrics for estimation of software? State characteristics of feature point
metrics.

1. Metrics for Estimation of Software:


o Lines of Code (LOC): Measures the size of the software codebase in lines of
code, often used for estimating effort and cost.
o Function Points (FP): Measures the functionality provided by the software
independently of the implementation language, based on inputs, outputs,
inquiries, files, and interfaces (a small calculation sketch follows this list).
o Cyclomatic Complexity: Measures the complexity of the software by
counting the number of linearly independent paths through the code, often
used for estimating testing effort.
o Halstead Complexity Measures: Quantifies software complexity based on
the number of unique operators and operands, providing estimates of program
volume, difficulty, and effort.
o Story Points: Relative measure of the complexity and effort required to
implement user stories or features in Agile development, based on expert
judgment and consensus.
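To make the Function Points metric listed above concrete, the following is a minimal sketch of the FP calculation; the counts and the degree of influence are assumed values, and the weights used are the standard average-complexity weights:

#include <stdio.h>

int main(void)
{
    /* Assumed counts of the five function types */
    int inputs = 20, outputs = 15, inquiries = 10, files = 5, interfaces = 3;

    /* Unadjusted function points: counts multiplied by the average weights */
    int ufp = inputs * 4 + outputs * 5 + inquiries * 4 + files * 10 + interfaces * 7;

    /* Value adjustment factor from the 14 general system characteristics,
       each rated 0..5; assume their ratings sum to 38. */
    int degree_of_influence = 38;
    double vaf = 0.65 + 0.01 * degree_of_influence;

    double fp = ufp * vaf;    /* adjusted function points */
    printf("UFP = %d, FP = %.1f\n", ufp, fp);
    return 0;
}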

Characteristics of Feature Point Metrics:

• Language Independence: Feature points are language-independent, allowing
estimation to be performed regardless of the programming language used.
• Focus on Functionality: Feature points primarily measure the functionality provided
by the software, making them suitable for estimating effort based on the user-visible
features.
• Objective and Consistent: Feature point metrics provide a standardized and
objective way to measure software functionality, enhancing consistency and
comparability across projects.
• Suitable for Early Estimation: Feature point estimation can be performed early in
the project lifecycle, even before detailed design or coding, providing valuable
insights into project scope and effort.

Group – C
(Long Answer Type Questions)
Each of 10 marks

5. a) What is cohesion? Explain the cohesion classification with respect to software design.
‘A good software should have high cohesion but low coupling’- Explain.
b) What are the propositions of Putnam’s model? (6 + 4)

a) Cohesion:

Definition: Cohesion refers to the degree to which the elements within a module belong
together and work towards a common purpose or functionality. It is a measure of how closely
related and focused the responsibilities of the elements within a module are.

Classification of Cohesion:

1. Coincidental Cohesion: This is the lowest level of cohesion, where elements within a
module are grouped arbitrarily and have no logical relationship with each other.
Coincidental cohesion occurs when unrelated functionalities or tasks are combined
within a module simply because they happen to reside in the same module.
2. Logical Cohesion: Elements within a module perform related tasks, but there is no
significant relationship between the tasks. The grouping of elements is based on a
common category or function, but they are not tightly interrelated. Logical cohesion is
an improvement over coincidental cohesion but still lacks a strong logical structure.
3. Temporal Cohesion: Elements within a module are grouped together because they
are executed at the same time. This occurs when tasks are combined within a module
because they need to be performed in a specific sequence or within a particular time
frame, rather than because they are logically related.
4. Procedural Cohesion: Elements within a module are grouped together because they are
always executed as parts of the same procedure, following a particular order of control
flow, even though they may operate on different or unrelated data. Procedural cohesion
is stronger than temporal cohesion, since the elements share a control-flow relationship,
but weaker than communicational cohesion, since they need not share the same data.
5. Communicational Cohesion: Elements within a module are grouped together
because they operate on the same data or share common inputs and outputs.
Communicational cohesion occurs when elements within a module interact closely
with each other and share data or communicate extensively.
6. Sequential Cohesion: Elements within a module are grouped together because they
are executed in a specific sequence, with the output of one element serving as the
input to the next. Sequential cohesion occurs when elements within a module are
arranged in a step-by-step sequence, such as in a procedural algorithm.
7. Functional Cohesion: This is the highest level of cohesion, where elements within a
module are grouped together because they perform a single, well-defined function or
task. Functional cohesion occurs when all elements within a module contribute to a
common objective and are closely related in terms of functionality.

Explanation: The statement "A good software should have high cohesion but low coupling"
emphasizes the importance of both cohesion and coupling in software design.

• High Cohesion: High cohesion ensures that elements within a module are closely
related and focused on a single task or functionality. This makes the module easier to
understand, maintain, and modify because it has a clear and well-defined purpose.
• Low Coupling: Low coupling refers to the degree of interdependence between
modules. Modules with low coupling are loosely connected and interact minimally
with each other. This promotes modularity, flexibility, and reusability, as changes to
one module are less likely to impact other modules.

By having high cohesion and low coupling, a software system becomes more modular,
maintainable, and scalable. Each module is self-contained, with clear responsibilities and
minimal dependencies on other modules. This design approach enhances software quality,
reduces complexity, and facilitates efficient development and maintenance processes.
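As a small illustration of this design goal, the following C sketch (the module and names are hypothetical, not taken from the original answer) shows a functionally cohesive module whose narrow interface keeps callers loosely coupled to its internal representation:

#include <stdbool.h>

#define STACK_MAX 64

/* A functionally cohesive "stack" module: every element below exists to
   support a single purpose, managing the stack. */
typedef struct {
    int items[STACK_MAX];
    int top;                          /* index of the next free slot */
} Stack;

void stack_init(Stack *s)
{
    s->top = 0;
}

bool stack_push(Stack *s, int value)
{
    if (s->top >= STACK_MAX)
        return false;                 /* stack full */
    s->items[s->top++] = value;
    return true;
}

bool stack_pop(Stack *s, int *out)
{
    if (s->top == 0)
        return false;                 /* stack empty */
    *out = s->items[--s->top];
    return true;
}

Because callers depend only on stack_init, stack_push and stack_pop rather than on the array inside the struct, the coupling between the module and its clients stays low: the internal representation could later change (for example, to a linked list) without changes to the calling code.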

b) Propositions of Putnam’s Model:

Putnam's model, also known as the Putnam Resource Allocation Model, is a cost estimation
model used in software engineering. The model is based on several propositions that form the
foundation of its estimation approach:

1. Effort is Proportional to Size: The effort required to develop a software project is
directly proportional to its size. Larger projects require more effort to develop and
implement.
2. Effort is Proportional to Product of Size and Complexity: Effort is not only
influenced by the size of the project but also by its complexity. More complex
projects, even if they are of similar size, require additional effort to develop and
manage.
3. Effort is Proportional to Power of Complexity: The influence of complexity on
effort is not linear but follows a power law relationship. Higher levels of complexity
have a disproportionate impact on effort compared to lower levels of complexity.
4. Effort Depends on Management Intensity: Effective management practices can mitigate
the impact of project size and complexity on effort. Projects with higher levels of
management intensity (rigorous planning, coordination, and control) tend to use the
available effort more efficiently and therefore require less effort to develop.
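In quantitative terms, these propositions are commonly summarized by Putnam's software equation:

Size = C_k × K^(1/3) × t_d^(4/3)

where C_k is a technology/environment constant, K is the total project effort (person-years) and t_d is the development time (years). Rearranging gives K = Size^3 / (C_k^3 × t_d^4), which expresses the model's central trade-off: compressing the schedule (reducing t_d) increases the required effort roughly with the fourth power of the compression.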

6. a) What do you mean by forking and joining in Activity Diagram?


b) What are extend and include in use case diagram?
c) What are dependency, aggregation and composition in use case diagram? (4 + 3 + 3)

a) Forking and Joining in Activity Diagram:


Forking: Forking in an activity diagram represents the creation of parallel threads of
execution or concurrent activities. It allows multiple activities to be initiated simultaneously,
indicating that two or more paths of execution can be pursued concurrently without waiting
for each other.

Joining: Joining in an activity diagram represents the merging of parallel threads of
execution back into a single path. It indicates that multiple concurrent activities are
synchronized or merged at a common point, and the execution continues as a single flow.

Illustration: Consider an example where a process involves two tasks that can be executed
simultaneously and then need to be synchronized at a later point:

• Forking: At a certain point in the process, the diagram may split into two or more
paths, each representing a separate task or activity that can be executed concurrently.
This is the forking point.
• Joining: After the parallel activities are completed, the diagram may converge or join
back into a single path. This is the joining point, where the concurrent activities
synchronize and continue together.

b) Extend and Include in Use Case Diagram:

Extend: Extend relationship in a use case diagram indicates that one use case (the extension)
may optionally extend another base use case under certain conditions. It allows for additional
functionality to be added to the base use case when specific conditions are met.

Include: Include relationship in a use case diagram indicates that one use case (the including
use case) includes the functionality of another base use case. It signifies that the included use
case is always invoked by the including use case, representing a mandatory inclusion of
functionality.

c) Dependency, Aggregation, and Composition in Use Case Diagram:

Dependency: Dependency relationship in a use case diagram indicates that one use case
depends on another use case, typically for input, output, or other information exchange. It
represents a weaker form of relationship compared to association and is often denoted by a
dashed arrow.

Aggregation: Aggregation relationship in a use case diagram represents a whole-part
relationship between two use cases, where one use case (the whole) contains or consists of
another use case (the part). It signifies a weaker form of association and is often depicted
with a hollow diamond at the whole end.

Composition: Composition relationship in a use case diagram is a stronger form of
aggregation, where the whole-use case is responsible for the creation and management of the
part-use case. It indicates a stronger relationship than aggregation and is depicted with a filled
diamond at the whole end.

In summary, these relationships in a use case diagram help to depict the dependencies,
interactions, and structural associations between different use cases, enhancing the
understanding of the system's behavior and functionality.
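Although these relationships are normally shown in diagram notation, the ownership difference between aggregation and composition can also be illustrated with a small C sketch (the type names here are hypothetical, chosen only for illustration):

/* Composition: the Circle owns its centre Point by value; the Point is created
   and destroyed together with the Circle and cannot outlive it. */
typedef struct { double x, y; } Point;

typedef struct {
    Point centre;             /* the part lives inside the whole */
    double radius;
} Circle;

/* Aggregation: the Course only refers to a Teacher that exists independently;
   destroying the Course does not destroy the Teacher. */
typedef struct { const char *name; } Teacher;

typedef struct {
    const Teacher *teacher;   /* the whole holds a reference to the part */
    const char *title;
} Course;

int main(void)
{
    Teacher t = { "A. Sen" };                    /* exists on its own */
    Course  c = { &t, "Software Engineering" };  /* aggregates the teacher */
    Circle  k = { { 0.0, 0.0 }, 1.5 };           /* the centre is composed into the circle */
    (void)c;
    (void)k;
    return 0;
}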
7. a) What are the types of software maintenance? What is architectural evolution?
b) How the CASE tools are classified. Explain about software cost estimation.
c) What is the purpose of timeline chart? (5 + 5 + 5)

a) Types of Software Maintenance and Architectural Evolution:

Types of Software Maintenance:

1. Corrective Maintenance: Involves fixing defects or errors identified during
operation or use of the software.
2. Adaptive Maintenance: Involves modifying the software to accommodate changes
in the environment, such as hardware upgrades or changes in regulations.
3. Perfective Maintenance: Involves enhancing or optimizing the software to improve
performance, usability, or functionality based on user feedback or evolving
requirements.
4. Preventive Maintenance: Involves proactively identifying and addressing potential
issues or vulnerabilities in the software to prevent future failures or problems.

Architectural Evolution: Architectural evolution refers to the process of modifying or
refining the architecture of a software system over time to accommodate changing
requirements, technologies, or environmental factors. It involves making deliberate changes
to the system's architecture to improve its scalability, flexibility, maintainability, or other
quality attributes. Architectural evolution may include activities such as refactoring,
redesigning, or restructuring components, interfaces, or dependencies within the system to
align with current and future needs.

b) Classification of CASE Tools and Software Cost Estimation:

Classification of CASE Tools: CASE (Computer-Aided Software Engineering) tools can be
classified into several categories based on their primary functions:

1. Diagramming Tools: Tools for creating various diagrams and visual representations,
such as UML diagrams, data flow diagrams, and entity-relationship diagrams.
2. Modeling Tools: Tools for creating and analyzing software models, such as
requirements models, design models, and process models.
3. Code Generation Tools: Tools for automatically generating code from higher-level
design or modeling representations.
4. Documentation Tools: Tools for generating documentation, reports, and other
project artifacts from software models or code.
5. Version Control Tools: Tools for managing and tracking changes to software
artifacts, source code, and project documents.

Software Cost Estimation: Software cost estimation involves predicting the effort, time, and
resources required to develop or maintain a software system. It is crucial for budgeting,
planning, and managing software projects effectively. Various techniques and models are
used for software cost estimation, including:
1. Expert Judgment: Involves consulting with domain experts, project managers, or
experienced practitioners to estimate project costs based on their knowledge and
experience.
2. Algorithmic Models: Use mathematical algorithms and historical project data to
estimate costs based on factors such as project size, complexity, and productivity
rates. Examples include COCOMO (Constructive Cost Model) and function point
analysis (a small COCOMO sketch follows this list).
3. Parametric Models: Use statistical analysis and regression techniques to estimate
costs based on a set of project parameters and historical data from similar projects.
4. Analogous Estimation: Involves using data from past similar projects as a basis for
estimating costs for the current project, assuming that similar projects will have
similar costs.
5. Top-Down and Bottom-Up Estimation: Top-down estimation starts with an overall
project estimate and then refines it based on detailed requirements, whereas bottom-
up estimation breaks down the project into smaller components and estimates costs
for each component separately.
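As a concrete illustration of the algorithmic models in point 2 above, the following is a minimal Basic COCOMO sketch; the project size is an assumed value and the coefficients are the classic organic-mode constants:

#include <math.h>
#include <stdio.h>

/* Basic COCOMO, organic mode: Effort = a * (KLOC)^b person-months,
   Time = c * (Effort)^d months, with a = 2.4, b = 1.05, c = 2.5, d = 0.38. */
int main(void)
{
    double kloc = 32.0;                        /* assumed project size in KLOC */
    double effort = 2.4 * pow(kloc, 1.05);     /* roughly 91 person-months */
    double time   = 2.5 * pow(effort, 0.38);   /* roughly 14 months */

    printf("Estimated effort: %.1f person-months\n", effort);
    printf("Estimated development time: %.1f months\n", time);
    return 0;
}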

c) Purpose of Timeline Chart:

A timeline chart, also known as a Gantt chart, is a visual representation of project tasks,
activities, and milestones plotted against a timeline. The purpose of a timeline chart is to
provide a graphical overview of the project schedule, including start and end dates, durations,
dependencies, and progress. It helps project managers, team members, and stakeholders to:

• Plan and Schedule: Identify and schedule project activities, tasks, and milestones
based on their start and end dates, durations, and dependencies.
• Track Progress: Monitor the progress of project activities and tasks over time,
identifying delays, bottlenecks, or areas where additional resources may be required.
• Manage Resources: Allocate resources, personnel, and equipment effectively by
visualizing their availability and utilization across different project phases.
• Communicate: Communicate project schedules, timelines, and milestones to team
members, stakeholders, and clients, ensuring alignment and understanding of project
goals and expectations.
• Identify Dependencies: Identify dependencies between different project activities
and tasks, ensuring that they are sequenced and coordinated effectively to minimize
delays and conflicts.

In summary, timeline charts provide a comprehensive and easy-to-understand visualization of
project schedules, enabling effective planning, tracking, resource management, and
communication throughout the project lifecycle.