CSE 1 Solution
Group – A
(Multiple Choice Type Questions)
a) Higher amount of risk analysis b) Doesn't work well for smaller projects c) Additional
functionalities are added later on d) Strong approval and documentation control
(iv) Which of the following uses empirically derived formulas to predict effort as a function
of LOC or FP?
a) The prototyping model facilitates the reusability of components. b) RAD Model facilitates
reusability of components c) Both RAD & Prototyping Model facilitates reusability of
components d) None
(vi) Which one of the following activities is not recommended for software processes in
software engineering?
(vii) On what basis is plan-driven development different from that of the software
development process?
a) Based on the iterations that occurred within the activities. b) Based on the output, which
is derived after negotiating in the software development process. c) Based on the interleaved
specification, design, testing, and implementation activities. d) All of the above
(viii) The main activity of the design phase of the system life cycle is to?
a) Replace the old system with the new one b) Develop and test the new system c)
Understand the current system d) Propose alternatives to the current system
(ix) A system analyst does not need to consider _____?
Group – B
(Short Answer Type Questions)
Each of 5 marks
2. Which life cycle model would you follow for developing software for each of the
following applications? Justify your selection of model with the help of an appropriate
reason.
a) A Game
b) A Text editor (2 + 3)
a) A Game
Model chosen: Agile (iterative/incremental) model.
Reason: Game development benefits from the iterative nature of Agile because it allows for
continuous feedback and improvement. Games often require frequent testing, user feedback,
and refinement of features and mechanics. An iterative approach enables developers to build
a playable version quickly, test it with users, and make adjustments based on the feedback,
ensuring that the final product is engaging and meets user expectations.
b) A Text Editor
Model chosen: Waterfall (classical life cycle) model.
Reason: Developing a text editor can be more straightforward and less prone to frequent
changes in requirements compared to a game. The Waterfall model works well here as it
allows for a clear definition of requirements, followed by systematic design, implementation,
and testing phases. Once the features and functionalities are well-defined, the development
process can proceed in a structured manner, ensuring that all specified requirements are met
without the need for constant iteration.
1. Correctness: The SRS should accurately describe the system to be built. It should
contain all necessary requirements agreed upon by all stakeholders.
2. Unambiguity: The requirements should be stated clearly without any ambiguity.
Each requirement should be interpreted in only one way.
3. Completeness: The SRS should include all significant requirements, including
responses to all possible inputs and conditions, coverage of all software
functionalities, and constraints.
4. Consistency: There should be no conflicting requirements in the SRS. Consistency
must be maintained across all sections of the document.
5. Verifiability: Each requirement should be stated in such a way that it can be verified
through testing, inspection, analysis, or demonstration.
4. a) Why is the intermediate COCOMO expected to give more accurate estimates than
the basic COCOMO?
The basic COCOMO model assumes that effort and development time are
functions of the product size alone. However, a host of other project parameters besides the
product size affect the effort required to develop the product as well as the development time.
Therefore, in order to obtain an accurate estimation of the effort and project duration, the
effect of all relevant parameters must be taken into account. The intermediate COCOMO
model recognizes this fact and refines the initial estimate obtained using the basic COCOMO
expressions by using a set of 15 cost drivers (multipliers) based on various attributes of
software development. That is why the intermediate COCOMO model is expected to give more
accurate estimates than the basic COCOMO model.
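As a brief illustration (not part of the original answer; the coefficients 2.4 and 1.05 are the textbook basic COCOMO organic-mode constants, and the cost-driver multipliers shown are purely illustrative, not the official table values), the sketch below shows how an intermediate-style estimate refines a basic estimate by multiplying it with an Effort Adjustment Factor (EAF), the product of the cost-driver ratings:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double kloc = 32.0;                              /* assumed product size in KLOC      */
    double basic_effort = 2.4 * pow(kloc, 1.05);     /* basic COCOMO effort, organic mode */

    /* Illustrative ratings for three of the 15 cost drivers (remaining drivers = 1.0). */
    double drivers[] = { 1.15,   /* e.g. higher required reliability  */
                         0.90,   /* e.g. capable, experienced staff   */
                         1.10 }; /* e.g. tight execution-time limits  */

    double eaf = 1.0;
    for (int i = 0; i < 3; i++)
        eaf *= drivers[i];

    double intermediate_effort = basic_effort * eaf; /* refined (intermediate) estimate */

    printf("Basic effort        : %.1f person-months\n", basic_effort);
    printf("EAF                 : %.2f\n", eaf);
    printf("Intermediate effort : %.1f person-months\n", intermediate_effort);
    return 0;
}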
b) Use a schematic diagram to show the order in COCOMO estimation technique for
i) cost
ii) effort
iii) duration
iv) size (3 + 2)
The COCOMO (Constructive Cost Model) estimation technique provides a framework for
estimating the size, effort, duration, and cost of a software project. The order of
estimation can be shown schematically as:
Estimate Size → Estimate Effort → Estimate Duration → Estimate Cost
Explanation:
In this scheme, the estimation starts from the project size, moves on to effort, then
duration, and finally the cost. This order reflects the logical flow of the COCOMO model,
where each parameter depends on the one estimated before it.
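The order above can also be traced numerically. The sketch below uses the textbook basic COCOMO organic-mode constants; the size value and the labour rate are assumed purely for illustration. Effort is derived from size, duration from effort, and cost from effort:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double kloc     = 20.0;                      /* 1. estimated size (KLOC), assumed */
    double effort   = 2.4 * pow(kloc, 1.05);     /* 2. effort in person-months        */
    double duration = 2.5 * pow(effort, 0.38);   /* 3. nominal duration in months     */
    double rate     = 5000.0;                    /* assumed cost per person-month     */
    double cost     = effort * rate;             /* 4. cost follows from effort       */

    printf("Size     : %.1f KLOC\n", kloc);
    printf("Effort   : %.1f person-months\n", effort);
    printf("Duration : %.1f months\n", duration);
    printf("Cost     : %.0f (currency units)\n", cost);
    return 0;
}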
Group – C
(Long Answer Type Questions)
Each of 10 marks
b) What are the different types of team structure followed in software projects? Discuss
them briefly. (5)
1. Chief Programmer Team Structure:
• Description: A senior, highly skilled programmer (the chief programmer) leads the
team and makes critical decisions. Other team members assist by implementing the
chief programmer’s designs and instructions.
• Advantages: Strong leadership and clear decision-making, high-quality design and
implementation driven by an experienced expert.
• Disadvantages: Potential bottleneck if the chief programmer becomes a single point
of failure, can limit team members' growth and contribution.
2. Democratic (Egoless) Team Structure:
• Description: All team members have equal status, and decisions are made
collectively. The focus is on collaboration and knowledge sharing rather than
hierarchy.
• Advantages: Promotes open communication, innovation, and shared responsibility,
leading to higher morale and creativity.
• Disadvantages: Potential for conflict and slower decision-making due to the lack of a
clear leader, may struggle with accountability.
A system analyst plays a crucial role in the software development process. The primary
responsibilities include:
The Prototype Model involves creating a preliminary version of the software to demonstrate
concepts and test functionalities before developing the final product. While it has several
benefits, it also comes with disadvantages:
A cost-benefit analysis in project management is a tool to evaluate the costs vs. benefits of an
important project or business proposal. It is a practical, data-driven approach for guiding
organizations and managers in making solid investment decisions. It helps determine if a
project or investment is financially feasible and beneficial for the organization.
A formal CBA identifies and quantifies all project costs and benefits, then calculates the
expected return on investment (ROI), internal rate of return (IRR), net present value (NPV),
and payback period. The difference between the costs and the benefits of moving forward
with the project is then calculated.
• Direct costs: These are costs that are directly related to the proposed project or
investment, e.g., materials, labor, and equipment.
• Indirect costs: These are related fixed costs that contribute to bringing the project or
investment to life, e.g., overhead, administrative, or training expenses.
• Opportunity costs: These are the benefits or opportunities foregone when a business
chooses one project or opportunity over others. To quantify opportunity costs, you
must weigh the potential benefits of the available alternatives.
• Future costs: These are costs that may come up later in the project. These costs
depend on certain factors happening, e.g., costs of mitigating potential risks.
• Cost-benefit analysis facilitates a structured cost management process, helping project
managers and company executives prioritize projects and allocate resources
effectively to achieve the organization’s main goals.
• Tangible benefits: These are measurable outcomes that can be easily quantified in
monetary terms, e.g., increased revenue or reduced costs.
• Intangible benefits: These benefits are difficult to measure in monetary terms. They
are indirect or qualitative outcomes, such as improved customer satisfaction or
increased employee morale.
Although intangible benefits may be difficult to quantify in financial terms, it is necessary to
factor them in when conducting a CBA, as they still have a significant impact on the overall
value of a project.
1. Net Present Value (NPV): This technique calculates the present value of all future
cash inflows and outflows associated with a project or decision, discounted at an
appropriate discount rate. A positive NPV indicates that the project or decision is
profitable and should be accepted.
2. Benefit-Cost Ratio (BCR): The BCR is calculated by dividing the present value of the
project's benefits by the present value of its costs. A BCR greater than 1 indicates that
the project or decision is economically viable and should be accepted.
3. Internal Rate of Return (IRR): The IRR is the discount rate that makes the NPV of a
project or decision equal to zero. If the IRR is greater than the required rate of return
or the cost of capital, the project or decision is considered acceptable.
4. Payback Period: This technique calculates the length of time it takes for the
cumulative cash inflows from a project or decision to equal the initial investment. A
shorter payback period is generally preferred, as it indicates a quicker return on
investment.
5. Break-Even Analysis: Break-even analysis determines the point at which total costs
equal total benefits, indicating no net loss or gain. It helps identify the minimum
performance required for a project to be viable. (A small computational sketch of NPV,
BCR, and the payback period follows this list.)
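The following sketch is not part of the original answer; the cash-flow figures and the 10% discount rate are assumed purely for illustration. It shows how the NPV, BCR, and payback period described above could be computed for a small project:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Assumed figures: initial investment and five years of net cash inflows. */
    double initial_cost = 100000.0;
    double inflows[5]   = { 30000, 35000, 40000, 40000, 45000 };
    double rate         = 0.10;                  /* assumed discount rate (10%) */

    double pv_benefits = 0.0;
    for (int t = 0; t < 5; t++)
        pv_benefits += inflows[t] / pow(1.0 + rate, t + 1);

    double npv = pv_benefits - initial_cost;     /* Net Present Value  */
    double bcr = pv_benefits / initial_cost;     /* Benefit-Cost Ratio */

    /* Payback period: years until cumulative (undiscounted) inflows cover the cost. */
    double cumulative = 0.0;
    int payback_year = 0;
    for (int t = 0; t < 5 && cumulative < initial_cost; t++) {
        cumulative += inflows[t];
        payback_year = t + 1;
    }

    printf("NPV            : %.2f\n", npv);
    printf("BCR            : %.2f (viable if greater than 1)\n", bcr);
    printf("Payback period : about %d years\n", payback_year);
    return 0;
}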
Group – A
(Multiple Choice Type Questions)
Each of 1 marks
a) To illustrate the system architecture b) To depict the system inputs and outputs c)
To depict the system data and relationships d) To illustrate the system processes
(iii) Which of the following is not a component of UML (Unified Modeling Language)?
a) To depict the system inputs and outputs b) To depict the system processes
c) To depict the system data and relationships d) To illustrate the system architecture
(vi) What is the purpose of a data dictionary?
1. Ease of Visualization:
o Decision trees provide a graphical representation of decision-making
processes, making them easier to understand and visualize compared to
decision tables, which are typically presented in tabular form. This visual
clarity can aid in communication and interpretation of complex decision logic.
2. Handling Continuous Variables:
o Decision trees can handle both categorical and continuous variables naturally,
allowing for more flexible modeling of decision-making scenarios. Decision
tables, on the other hand, are more suited for discrete, categorical inputs and
outputs, and may require additional processing for continuous variables.
3. Ability to Capture Non-Linear Relationships:
o Decision trees are capable of capturing non-linear relationships between input
variables and outcomes through recursive partitioning of the data space. This
enables them to model more complex decision boundaries compared to
decision tables, which may struggle to represent non-linear relationships
effectively.
A physical DFD depicts how data flows through a system at the implementation level,
showing the actual processes, data stores, and external entities involved in the system. It
represents the system as it will be implemented, including hardware components, software
modules, and the physical flow of data between them.
Example of a Physical DFD: Consider an online shopping system. In a physical DFD, you
might represent the actual servers, databases, and network connections involved in the
system. For instance, you could illustrate the process of a user placing an order by showing
the interaction between the web server, the database server storing product information, and
the payment gateway.
A logical DFD focuses on the functional aspects of a system without considering the
implementation details. It abstractly represents the system's processes, data flows, data stores,
and external entities, emphasizing the flow of information and the logical relationships
between components.
Example of a Logical DFD: Continuing with the online shopping system example, a logical
DFD might depict the high-level processes involved in the system, such as "Manage
Inventory," "Process Orders," and "Handle Payments." It would illustrate how data flows
between these processes, including inputs from external entities like customers and outputs to
fulfillment centers and payment processors. The logical DFD would not specify the specific
servers or databases involved but would focus on the functional flow of information within
the system.
Group – C
(Long Answer Type Questions)
Each of 10 marks
In summary, use PERT charts when managing projects with high uncertainty and complexity,
and use Gantt charts for projects with well-defined tasks and known durations.
A baseline in Software Configuration Management (SCM) is a formally reviewed and agreed-upon
version of one or more configuration items that serves as the basis for further development
and can be changed only through a formal change control procedure. Baselines are significant
because:
• Version Control: Baselines provide a reference point for version control, allowing
developers to track changes made to the software over time and revert to previous
versions if necessary.
• Quality Assurance: Baselines help ensure the quality and consistency of the software
by defining a standard configuration that has undergone testing and validation.
• Change Management: Baselines facilitate change management by establishing a
clear starting point for new development efforts and documenting the state of the
software at key milestones.
• Configuration Management: Baselines are used to manage and control
configuration items (CIs), such as source code, documentation, and executable files,
throughout the software development lifecycle.
Overall, baselines play a critical role in ensuring the integrity, reliability, and traceability of
software configurations in SCM.
CASE refers to the use of computer-based tools and techniques to support various activities
in the software development process, including analysis, design, coding, testing, and
maintenance. CASE tools automate repetitive tasks, provide visual modeling capabilities, and
facilitate collaboration among team members. Some common features of CASE tools include:
• Requirements management
• Diagramming and modeling
• Code generation
• Version control
• Testing and debugging
• Documentation generation
1. Unit Testing:
o Goal: To verify the correctness of individual units or components of the
software, typically at the code level.
o Focus: Identifying defects in code logic, ensuring that each unit functions as
intended, and validating the behavior of individual functions or methods.
2. Integration Testing:
o Goal: To test the interaction between different units or modules when
combined together.
o Focus: Detecting defects in the interfaces and interactions between modules,
ensuring that data flows correctly between components, and validating the
integration of units within the larger system.
3. System Testing:
o Goal: To evaluate the behavior of the entire system as a whole, including its
functionality, performance, and reliability.
o Focus: Verifying that the system meets its specified requirements, validating
its overall functionality from an end-to-end perspective, and identifying any
defects that arise when the system is used in a realistic environment.
4. Acceptance Testing:
o Goal: To determine whether the system satisfies the acceptance criteria and is
ready for deployment to the end-users.
o Focus: Validating that the system meets the user's needs, ensuring that it
aligns with business requirements, and gaining approval from stakeholders for
deployment.
Software does not wear out in the same way that physical hardware does because software is
not subject to the same types of physical degradation over time. The key differences between
software and hardware in terms of wear and tear are:
• Physical Nature: Hardware components are physical objects made of materials that
degrade over time due to factors such as friction, heat, and exposure to environmental
conditions. In contrast, software consists of digital instructions stored electronically,
which do not degrade physically.
• Maintenance and Updates: Software can be updated, maintained, and patched to fix
bugs, add new features, or improve performance without degradation. On the other
hand, hardware components may need to be replaced entirely if they become worn out
or obsolete.
• Endurance and Lifespan: Hardware components have a limited lifespan determined
by their material properties and usage, and they degrade over time with normal use. In
contrast, software can theoretically last indefinitely if properly maintained and
updated.
• Obsolescence: Hardware components become obsolete as technology advances and
newer, more efficient components are developed. Software, while also subject to
obsolescence as new versions and technologies emerge, can often be updated or
adapted to work with newer hardware.
Low cohesion in a module means that the elements within the module are loosely related and
do not contribute to a single, well-defined purpose. This can lead to several problems: the
module becomes harder to understand and maintain, changes to one part tend to ripple into
unrelated parts, the module is difficult to reuse in other contexts, and testing becomes
more complicated because many unrelated behaviors must be exercised together.
Group – A
(Multiple Choice Type Questions)
Each of 1 marks
a) Abstract data types are the same as classes b) Abstract data types do not allow
inheritance c) Classes cannot inherit from the same base class
d) Objects have state and behavior
a) In the first loop b) in the first and second loop c) In every loop
d) before using spiral model
(vi) Each time a defect gets detected and fixed, the reliability of a software product
(vii) In function point analysis, number of general system characteristics used to rate the system are
a) 10 b) 14 c) 20 d) 12
(viii) Requirements can be refined using
Group – B
(Short Answer Type Questions)
Each of 5 marks
2. Explain about software quality assurance
Software Quality Assurance (SQA) is a systematic process designed to ensure that software
products and processes meet defined quality standards and perform as expected. SQA
encompasses a variety of activities throughout the software development lifecycle, including
process monitoring, standards compliance, and testing. The primary goal of SQA is to
identify and address defects early in the development process, thereby improving the
reliability, performance, and usability of the final product.
SQA involves both preventive and corrective measures. Preventive measures focus on
improving the development process to reduce the likelihood of defects, such as implementing
coding standards, conducting code reviews, and using automated testing tools. Corrective
measures involve detecting and fixing defects through activities like debugging and user
acceptance testing.
By integrating SQA practices, organizations can minimize risks, reduce costs associated with
post-release defects, and ensure that the software meets both functional and non-functional
requirements, ultimately leading to higher customer satisfaction.
Software validation and verification are two critical components of the software quality
assurance process that ensure a software product meets its requirements and specifications.
Verification checks that the product is being built correctly according to its specifications
("Are we building the product right?") and relies mainly on reviews, inspections,
walkthroughs, and static analysis. Validation checks that the product actually fulfills its
intended use and the user's needs ("Are we building the right product?") and relies mainly on
dynamic testing against the requirements. Together, verification and validation help ensure
that the software is both built correctly and fulfills its intended purpose, leading to
higher quality and more reliable software products.
White box testing and black box testing are two fundamental approaches to software testing,
each with distinct techniques and focuses.
• White Box Testing: White box testing, also known as clear box or structural testing,
involves testing the internal structures or workings of an application. Testers have
access to the source code and use their knowledge of the code structure, algorithms,
and logic to design test cases. Techniques include statement coverage, branch
coverage, path coverage, and unit testing. The goal is to ensure that the internal
operations are performing as expected and to identify any hidden errors or security
vulnerabilities. This type of testing is typically performed by developers or testers
with programming knowledge.
• Black Box Testing: Black box testing, also known as behavioral or functional testing,
focuses on testing the software's functionality without any knowledge of the internal
code structure. Testers evaluate the software based on the inputs provided and the
outputs produced, ensuring it behaves according to the specified requirements.
Techniques include equivalence partitioning, boundary value analysis, decision table
testing, and use case testing. The goal is to validate the external behavior of the
software, making it suitable for end-user acceptance testing. This type of testing is
usually performed by quality assurance professionals or end-users who interact with
the software as a black box.
Both approaches are essential for comprehensive software testing, ensuring both internal code
integrity and external functionality.
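As a short illustration (the function is_pass and all test values below are hypothetical), the same unit can be exercised from both viewpoints: the white-box cases are chosen by looking at the code to cover each branch, while the black-box cases are chosen from the specification alone using boundary value analysis:

#include <assert.h>

/* Hypothetical unit under test: returns 1 if a mark is a pass (>= 40), else 0. */
static int is_pass(int mark)
{
    if (mark >= 40)
        return 1;
    return 0;
}

int main(void)
{
    /* White-box (branch coverage): one case per branch of the if statement. */
    assert(is_pass(50) == 1);   /* takes the true branch  */
    assert(is_pass(10) == 0);   /* takes the false branch */

    /* Black-box (boundary value analysis on the specified pass mark of 40). */
    assert(is_pass(39) == 0);   /* just below the boundary */
    assert(is_pass(40) == 1);   /* on the boundary         */
    assert(is_pass(41) == 1);   /* just above the boundary */
    return 0;
}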
Group – C
(Long Answer Type Questions)
Each of 10 marks
b) What is meant by stub? What is a driver? In which testing are they required?
Explain briefly. (4 + 6)
Stub:
A stub is a piece of code used in software testing to simulate the behavior of a module or
component that a module under test depends on. Stubs are typically used when the actual
module is not yet developed, unavailable, or impractical to use during testing. The stub
provides the necessary responses to the calls made by the module under test, allowing the test
to proceed without the actual dependent module.
Example: If module A calls module B, and module B is not yet implemented, a stub for
module B would be created to return pre-defined responses to module A's calls, enabling the
testing of module A independently.
Driver:
A driver is a piece of code used in software testing to simulate the behavior of a module that
interacts with the module under test from a higher level. Drivers are typically used when the
module under test is a lower-level module and the higher-level controlling modules are not
yet developed or available. The driver calls the functions of the module under test and
provides the necessary input data.
Example: If module A (the higher-level module) calls module B, and module A is not yet
implemented, a driver standing in for module A would be created to call module B's functions
with the required input data, enabling the testing of module B independently.
Testing Context:
Stubs and drivers are primarily required in Integration Testing, specifically in Incremental
Integration Testing approaches such as Top-Down and Bottom-Up integration testing.
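A minimal sketch (hypothetical modules, for illustration only) of how a stub and a driver might look during incremental integration testing: the stub stands in for a lower-level module that is not yet available, and the driver stands in for the missing higher-level caller of the module under test:

#include <stdio.h>

/* Stub: stands in for module B (e.g., a tax-calculation module not yet built). */
double calculate_tax_stub(double amount)
{
    (void)amount;
    return 10.0;              /* canned, pre-defined response */
}

/* Module under test: module A, which depends on module B. */
double total_price(double amount)
{
    return amount + calculate_tax_stub(amount);   /* calls the stub instead of the real B */
}

/* Driver: stands in for the higher-level caller of module A (e.g., a missing UI layer). */
int main(void)
{
    double result = total_price(100.0);           /* driver supplies the test input */
    printf("total_price(100.0) = %.2f (expected 110.00)\n", result);
    return 0;
}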
6. Consider Roxy Roll center, a restaurant near College Street, Kolkata, owned by
Saurav. Some are convinced that its Egg-Chicken Rolls are the best in College Street.
Many people, especially Presidency University students and faculties, frequently eat at
Roxy. The restaurant uses an information system that takes customer orders, sends the
orders to the kitchen, monitors goods sold and inventory and generates reports for
management.
Draw the context diagram and Level 1 DFD for the Roxy’s food ordering system. Also
draw a level 2 DFD that will show the decomposition of any one process from level 1
DFD.
(3 + 4 + 3)
7. Write short notes on
i) UML diagrams
ii) Integration testing and load testing
i) UML diagrams
Unified Modeling Language (UML) diagrams are a set of graphical notations used to create
abstract models of software systems. UML is a standardized modeling language that helps in
visualizing, specifying, constructing, and documenting the artifacts of software systems.
UML diagrams can be broadly categorized into two types: structural diagrams and behavioral
diagrams.
• Structural Diagrams: These diagrams represent the static aspects of the system.
o Class Diagram: Shows the classes in the system, their attributes, operations,
and the relationships between the classes.
o Object Diagram: Represents a snapshot of the objects in the system and their
relationships at a specific point in time.
o Component Diagram: Depicts the components of the system and how they
are wired together.
o Deployment Diagram: Illustrates the physical deployment of artifacts on
nodes.
o Package Diagram: Organizes classes into packages, showing dependencies
between packages.
• Behavioral Diagrams: These diagrams represent the dynamic aspects of the system.
o Use Case Diagram: Represents the functionality of the system from a user
perspective, showing actors and use cases.
o Sequence Diagram: Shows object interactions arranged in time sequence,
highlighting how objects communicate.
o Activity Diagram: Illustrates the workflow or activities within the system.
o State Diagram: Describes the states an object goes through and the transitions
between these states.
o Collaboration Diagram: Focuses on the structural organization of objects
that send and receive messages.
Group – A
(Multiple Choice Type Questions)
Each of 1 marks
1. (i) The feature of the object oriented paradigm which helps code reuse is
(ii) All activities lying on critical path have slack time equal to
a) 0 b) 1 c) 2 d) None of above
(iii) Alpha and Beta Testing are forms of
Group – B
(Short Answer Type Questions)
Each of 5 marks
2. What are the factors affecting coupling? What is relationship between coupling and
cohesion? (2 + 3)
1. Type of Interaction: The nature of the interaction between modules, such as data
coupling, control coupling, or content coupling, impacts the level of coupling. Data
coupling, where only data is shared, is preferred over control or content coupling (a
small code sketch contrasting parameter passing with shared global data follows this list).
2. Interface Complexity: The complexity of the module interfaces, including the
number and types of parameters, can increase or decrease coupling. Simpler interfaces
usually result in lower coupling.
3. Module Communication: The method of communication between modules, whether
direct calls, shared data, or message passing, influences coupling. Direct calls and
shared data tend to increase coupling.
4. Change Propagation: The likelihood that changes in one module will necessitate
changes in another module is a significant factor. High change propagation indicates
higher coupling.
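To illustrate the first factor above (hypothetical code): the first function exhibits data coupling, since everything it needs is passed explicitly as parameters, while the second exhibits common coupling through shared global variables, which increases the interdependence between modules:

#include <stdio.h>

/* Data coupling: modules interact only through explicit parameters. */
static double area_data_coupled(double length, double width)
{
    return length * width;
}

/* Common coupling: modules interact through shared global data. */
static double g_length, g_width;          /* shared state both modules depend on */

static double area_common_coupled(void)
{
    return g_length * g_width;            /* any change to the globals ripples here */
}

int main(void)
{
    printf("data coupled  : %.1f\n", area_data_coupled(3.0, 4.0));

    g_length = 3.0;
    g_width  = 4.0;
    printf("common coupled: %.1f\n", area_common_coupled());
    return 0;
}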
Relationship between Coupling and Cohesion:
Coupling and cohesion are complementary measures of modular design quality and tend to vary
inversely. A module with high cohesion has a single, well-defined purpose and therefore needs
to depend less on other modules, which keeps coupling low. A module with low cohesion mixes
unrelated responsibilities and usually has to interact heavily with other modules, which
raises coupling. Good software design therefore aims for high cohesion within modules and low
coupling between them.
3. What is formal technical review (FTR)? What are the differences between fault,
failure and error? (2 + 3)
A Formal Technical Review (FTR) is a structured and organized process in which a software
product or its components are examined by a team of reviewers to identify defects and ensure
adherence to standards and requirements. The primary objectives of an FTR are to improve
software quality, verify that the software meets its requirements, and ensure that the
development process is being followed correctly.
Difference between fault, failure and error: An error is a human mistake made by a developer
or analyst, such as misunderstanding a requirement. A fault (also called a defect or bug) is
the manifestation of that error in the software, for example an incorrect statement or design
element. A failure is the observable deviation of the software's behavior from its expected
behavior, which occurs when a fault is executed under certain conditions. Thus an error may
introduce a fault, and a fault may, but need not, lead to a failure.
a) Blocking State:
In the context of operating systems and multithreading, the term "blocking state" refers to a
situation where a process or thread is unable to continue execution until a specific event or
condition is met. This state occurs when a process requests a resource that is not currently
available or waits for an event that has not yet occurred.
Key Points:
• Waiting for Resources: A process may enter the blocking state when it needs to
access a resource, such as I/O devices, files, or network connections, which are
currently in use or unavailable.
• Synchronization: In multithreading, a thread may block while waiting for a
synchronization primitive, such as a mutex, semaphore, or condition variable, to be
released.
• Event Waiting: Processes or threads can block while waiting for specific events, such
as user input, signals, or inter-process communication messages.
• State Transition: When the required resource becomes available or the awaited event
occurs, the process or thread transitions from the blocking state to the ready state,
where it can resume execution.
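A minimal sketch (an assumed POSIX threads example, not part of the original answer; compile with -lpthread) of a thread entering the blocking state while waiting for a mutex held by another thread, and leaving it once the mutex is released:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);   /* thread BLOCKS here until the mutex is released */
    printf("worker: acquired the mutex, leaving the blocking state\n");
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_mutex_lock(&lock);            /* main holds the mutex first             */
    pthread_create(&t, NULL, worker, NULL);
    sleep(1);                             /* worker is now blocked, waiting         */
    pthread_mutex_unlock(&lock);          /* event occurs: worker becomes runnable  */
    pthread_join(t, NULL);
    return 0;
}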
1. Data Element Name: The unique identifier or name of the data item.
2. Data Type: The type of data (e.g., integer, float, string, date).
3. Description: A brief explanation of the data element and its purpose.
4. Length/Size: The size or length of the data element (e.g., maximum number of
characters for a string).
5. Default Value: The initial value assigned to the data element if no other value is
provided.
6. Constraints: Any rules or restrictions on the data element, such as range limits,
allowed values, or validation criteria.
7. Source: The origin of the data element, such as the source system or data entry point.
8. Relationships: Information about how the data element relates to other data elements,
including foreign key references and dependencies.
9. Owner: The person or role responsible for the data element.
10. Usage: Details on how and where the data element is used within the system.
11. Example Values: Sample data values for illustration and better understanding.
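For illustration (a hypothetical entry, not from the original answer), a single data dictionary entry covering the components listed above might look like this:
Data Element Name : Customer_Email
Data Type         : String
Description       : E-mail address used to contact the customer
Length/Size       : Maximum 254 characters
Default Value     : None
Constraints       : Must match a valid e-mail format; must be unique
Source            : Customer registration form
Relationships     : Referenced as a foreign key from Order records
Owner             : Customer Relationship team
Usage             : Order confirmations, password resets, notifications
Example Values    : priya@example.com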
Group – C
(Long Answer Type Questions)
Each of 10 marks
a) Software Reliability
Software reliability refers to the probability that a software system will function without
failure under specified conditions for a given period of time. It is a critical aspect of software
quality and reflects the dependability of the software. High reliability indicates that the
software is less likely to fail and can be trusted to perform its intended functions accurately
and consistently. Software reliability is influenced by factors such as the complexity of the
code, the quality of the design, the thoroughness of testing, and the effectiveness of error
detection and correction mechanisms.
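As a small illustrative note (an assumption, not stated in the original answer): under the widely used exponential failure-law model with a constant failure rate λ, reliability over an operating time t is R(t) = e^(−λt), which can be sketched as:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double lambda = 0.002;   /* assumed failure rate: failures per hour of operation */
    double t      = 100.0;   /* assumed mission time in hours                        */

    double reliability = exp(-lambda * t);   /* R(t) = e^(-lambda * t) */

    printf("R(%.0f h) = %.3f (probability of failure-free operation)\n", t, reliability);
    return 0;
}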
int compute(int x, int y)   /* assumed wrapper; the original fragment gives only the body */
{
    while (x != y)
    {
        if (x > y)
            x = x - y;
        else
            y = y - x;
    }
    return x;
}
The suite should include control flow graph, independent paths,
cyclomatic complexity (using two different techniques). Define
cyclomatic complexity. (8 + 2)
Cyclomatic Complexity is a software metric used to measure the complexity of a program's
control flow. It quantifies the number of linearly independent paths through a program's
source code. In other words, it represents the number of decision points or branches within
the code, indicating the number of possible paths that can be taken during program execution.
The cyclomatic complexity of a program is calculated using the control flow graph, where
nodes represent individual statements or decision points, and edges represent the flow of
control between these statements. The formula to compute cyclomatic complexity is:
V(G) = E − N + 2P
where E is the number of edges in the control flow graph, N is the number of nodes, and P is
the number of connected components (P = 1 for a single program or module).
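As a sketch of how this applies to the program segment above (assuming the control flow graph is drawn with one node each for the while test, the if test, the two assignment statements, and the return statement):
• Graph-based technique: N = 5 nodes, E = 6 edges, P = 1, so V(G) = E − N + 2P = 6 − 5 + 2 = 3.
• Decision-point technique: there are two predicate nodes (the while condition and the if condition), so V(G) = number of predicates + 1 = 2 + 1 = 3.
• Three linearly independent paths therefore form the basis set, for example: (i) x equals y initially, so the loop is never entered and the value is returned; (ii) the loop is entered with x > y, x is reduced, the loop eventually exits and the value is returned; (iii) the loop is entered with x < y, y is reduced, the loop eventually exits and the value is returned.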
Group – A
(Multiple Choice Type Questions)
Each of 1 marks
1. (i) Changes made to an information system to add the desired but not necessarily the
required features is called
(ii) All the modules of the system are integrated and tested as complete system in the case of
Group – B
(Short Answer Type Questions)
Each of 5 marks
4. What are the metrics for estimation of software? State characteristics of feature point
metrics.
Group – C
(Long Answer Type Questions)
Each of 10 marks
a) Cohesion:
Definition: Cohesion refers to the degree to which the elements within a module belong
together and work towards a common purpose or functionality. It is a measure of how closely
related and focused the responsibilities of the elements within a module are.
Classification of Cohesion:
1. Coincidental Cohesion: This is the lowest level of cohesion, where elements within a
module are grouped arbitrarily and have no logical relationship with each other.
Coincidental cohesion occurs when unrelated functionalities or tasks are combined
within a module simply because they happen to reside in the same module.
2. Logical Cohesion: Elements within a module perform related tasks, but there is no
significant relationship between the tasks. The grouping of elements is based on a
common category or function, but they are not tightly interrelated. Logical cohesion is
an improvement over coincidental cohesion but still lacks a strong logical structure.
3. Temporal Cohesion: Elements within a module are grouped together because they
are executed at the same time. This occurs when tasks are combined within a module
because they need to be performed in a specific sequence or within a particular time
frame, rather than because they are logically related.
4. Procedural Cohesion: Elements within a module are grouped together because they
always follow a certain sequence of execution, i.e. they are the successive steps of a
procedure or algorithm through which control flows, even though the steps may operate on
different data. Procedural cohesion is stronger than temporal cohesion but weaker than
communicational cohesion, because the elements share control flow rather than a single,
well-defined purpose.
5. Communicational Cohesion: Elements within a module are grouped together
because they operate on the same data or share common inputs and outputs.
Communicational cohesion occurs when elements within a module interact closely
with each other and share data or communicate extensively.
6. Sequential Cohesion: Elements within a module are grouped together because they
are executed in a specific sequence, with the output of one element serving as the
input to the next. Sequential cohesion occurs when elements within a module are
arranged in a step-by-step sequence, such as in a procedural algorithm.
7. Functional Cohesion: This is the highest level of cohesion, where elements within a
module are grouped together because they perform a single, well-defined function or
task. Functional cohesion occurs when all elements within a module contribute to a
common objective and are closely related in terms of functionality. (A small code sketch
contrasting coincidental and functional cohesion follows this list.)
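A small illustrative sketch (hypothetical code): the first function is functionally cohesive, since every statement serves one well-defined purpose, whereas the second lumps unrelated actions into one "utility" routine and is only coincidentally cohesive:

#include <stdio.h>

#define PI 3.14159265358979

/* Functional cohesion: everything here serves a single purpose. */
static double circle_area(double radius)
{
    return PI * radius * radius;
}

/* Coincidental cohesion: unrelated tasks bundled together arbitrarily. */
static void misc_utilities(void)
{
    printf("Backing up log file...\n");   /* file housekeeping      */
    printf("Sending payday reminder\n");  /* payroll-related notice */
    printf("pi is roughly %.2f\n", PI);   /* unrelated math output  */
}

int main(void)
{
    printf("area = %.2f\n", circle_area(2.0));
    misc_utilities();
    return 0;
}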
Explanation: The statement "A good software should have high cohesion but low coupling"
emphasizes the importance of both cohesion and coupling in software design.
• High Cohesion: High cohesion ensures that elements within a module are closely
related and focused on a single task or functionality. This makes the module easier to
understand, maintain, and modify because it has a clear and well-defined purpose.
• Low Coupling: Low coupling refers to the degree of interdependence between
modules. Modules with low coupling are loosely connected and interact minimally
with each other. This promotes modularity, flexibility, and reusability, as changes to
one module are less likely to impact other modules.
By having high cohesion and low coupling, a software system becomes more modular,
maintainable, and scalable. Each module is self-contained, with clear responsibilities and
minimal dependencies on other modules. This design approach enhances software quality,
reduces complexity, and facilitates efficient development and maintenance processes.
Putnam's model, also known as the Putnam Resource Allocation Model, is a cost estimation
model used in software engineering. The model is based on several propositions that form the
foundation of its estimation approach:
Illustration: Consider an example where a process involves two tasks that can be executed
simultaneously and then need to be synchronized at a later point:
• Forking: At a certain point in the process, the diagram may split into two or more
paths, each representing a separate task or activity that can be executed concurrently.
This is the forking point.
• Joining: After the parallel activities are completed, the diagram may converge or join
back into a single path. This is the joining point, where the concurrent activities
synchronize and continue together.
Extend: Extend relationship in a use case diagram indicates that one use case (the extension)
may optionally extend another base use case under certain conditions. It allows for additional
functionality to be added to the base use case when specific conditions are met.
Include: Include relationship in a use case diagram indicates that one use case (the including
use case) includes the functionality of another base use case. It signifies that the included use
case is always invoked by the including use case, representing a mandatory inclusion of
functionality.
Dependency: Dependency relationship in a use case diagram indicates that one use case
depends on another use case, typically for input, output, or other information exchange. It
represents a weaker form of relationship compared to association and is often denoted by a
dashed arrow.
In summary, these relationships in a use case diagram help to depict the dependencies,
interactions, and structural associations between different use cases, enhancing the
understanding of the system's behavior and functionality.
7. a) What are the types of software maintenance? What is architectural evolution?
b) How are CASE tools classified? Explain software cost estimation.
c) What is the purpose of timeline chart? (5 + 5 + 5)
1. Diagramming Tools: Tools for creating various diagrams and visual representations,
such as UML diagrams, data flow diagrams, and entity-relationship diagrams.
2. Modeling Tools: Tools for creating and analyzing software models, such as
requirements models, design models, and process models.
3. Code Generation Tools: Tools for automatically generating code from higher-level
design or modeling representations.
4. Documentation Tools: Tools for generating documentation, reports, and other
project artifacts from software models or code.
5. Version Control Tools: Tools for managing and tracking changes to software
artifacts, source code, and project documents.
Software Cost Estimation: Software cost estimation involves predicting the effort, time, and
resources required to develop or maintain a software system. It is crucial for budgeting,
planning, and managing software projects effectively. Various techniques and models are
used for software cost estimation, including:
1. Expert Judgment: Involves consulting with domain experts, project managers, or
experienced practitioners to estimate project costs based on their knowledge and
experience.
2. Algorithmic Models: Use mathematical algorithms and historical project data to
estimate costs based on factors such as project size, complexity, and productivity
rates. Examples include COCOMO (Constructive Cost Model) and function point
analysis.
3. Parametric Models: Use statistical analysis and regression techniques to estimate
costs based on a set of project parameters and historical data from similar projects.
4. Analogous Estimation: Involves using data from past similar projects as a basis for
estimating costs for the current project, assuming that similar projects will have
similar costs.
5. Top-Down and Bottom-Up Estimation: Top-down estimation starts with an overall
project estimate and then refines it based on detailed requirements, whereas bottom-
up estimation breaks down the project into smaller components and estimates costs
for each component separately.
A timeline chart, also known as a Gantt chart, is a visual representation of project tasks,
activities, and milestones plotted against a timeline. The purpose of a timeline chart is to
provide a graphical overview of the project schedule, including start and end dates, durations,
dependencies, and progress. It helps project managers, team members, and stakeholders to:
• Plan and Schedule: Identify and schedule project activities, tasks, and milestones
based on their start and end dates, durations, and dependencies.
• Track Progress: Monitor the progress of project activities and tasks over time,
identifying delays, bottlenecks, or areas where additional resources may be required.
• Manage Resources: Allocate resources, personnel, and equipment effectively by
visualizing their availability and utilization across different project phases.
• Communicate: Communicate project schedules, timelines, and milestones to team
members, stakeholders, and clients, ensuring alignment and understanding of project
goals and expectations.
• Identify Dependencies: Identify dependencies between different project activities
and tasks, ensuring that they are sequenced and coordinated effectively to minimize
delays and conflicts.