CONTENTS
INTRODUCTION
SECTION 1. ANALYSIS OF THE SUBJECT AREA
1.1 Electronic surveys and testing
1.2 Psychological testing
1.3 Principles and trends in web development
SECTION 2. DESIGNING A PSYCHOLOGICAL TESTING SYSTEM
2.1 Software life cycle and architecture
2.2 Designing the system functionality
2.3 Design of the internal structure
2.4 Database design
SECTION 3. DEVELOPMENT AND TESTING OF A SOFTWARE PRODUCT
3.1 Choosing development tools
3.2 Development of a graphical interface
3.3 Development of basic system algorithms
3.4 Testing the system
CONCLUSIONS
LIST OF REFERENCES
INTRODUCTION
In today's world, where digital technologies are changing the way we live and work, integrating these technologies into the field of psychology has become both relevant and necessary.
The development of a website for planning and conducting experimental psychological research
meets the need for modern, efficient and accessible tools for researchers and practitioners in this
field. The purpose of this thesis is to create and analyse a web resource that would facilitate the
effective planning and implementation of psychological research. The subject of the research is the
processes of designing, developing and implementing software for psychological testing, while the
object is the psychological testing system itself, implemented on the Django platform.
The work is expected to achieve important results, including the development of a functional
and intuitive interface, ensuring the reliability of data storage and processing, and creating a
convenient tool for analysing test results. The practical value of the work lies in the creation of a
product that can be used in real-world research, providing psychologists, researchers and students
with a powerful tool for conducting and analysing psychological tests. The scientific novelty of the work lies in a comprehensive approach to system development that combines modern web development methodologies with the specific needs of psychological testing, making the system relevant and in line with modern requirements.
SECTION 1. ANALYSIS OF THE SUBJECT AREA
1.1 Electronic surveys and testing
Online surveys (Internet surveys, web surveys) are a method of collecting sociological
information based on the use of Internet technology. Most often, online surveys are conducted using
an interactive questionnaire posted on a website and filled in from a computer or mobile device
online [1].
The first recorded survey conducted via remote interaction (e-mail) is considered to be the
study by Kiesler and Sproull (1986), which examined e-mail communication within a company.
Developing a model of limited non-verbal means ("reduced social cues"), the authors argued that e-
mail is an impoverished form of communication, as it provides an insufficient number of "social
cues". However, it was noted that due to its lower cost, online surveys will be more widely used in
the future. Online surveys have been conducted since 1994 (Kehoe & Pitkow 1996). The earliest
ones were initiated by individual organisations to study the market or to obtain feedback from
consumers on the quality of services. For example, in 1996, the American association CASRO conducted a survey among leading US companies, which found that 64% of
respondents highly appreciated the potential of online surveys and intended to use them in the near
future.
The online survey method has evolved from using email to creating panels. The survey by
sending questionnaires by e-mail (e-mail survey) was simple and did not require special computer
training from the researcher. Initially, the processing of returned questionnaires was done manually,
but later the software allowed for semi-automated processing. Email surveys were most suitable for
local research, for example, within institutions. Surveys of large audiences were conducted in
newsgroups, which were created to discuss certain topics. The text of the questionnaire was posted
on the website and updated every few days so that new visitors could see it and enter their answers.
This method made it possible to survey target groups that are difficult to reach through conventional surveys (for example, members of Alcoholics Anonymous).
Surveys on Internet forums or teleconferences allowed for open-ended questions and were
good for interviewing experts. Another technology, a questionnaire posted on web pages in HTML format, allowed for a large number of questions and automated processing of results. Creating surveys on web pages, however, requires technical training. Online focus groups have
been widely used in marketing research, allowing for real-time interviews with several respondents
[2]. Online research sampling is ensured by recruiting potential respondents in an online panel. Pre-
filling out a questionnaire on the respondent's socio-demographic data allows for quota sampling for
specific research purposes [3].
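The quota-sampling step described above can be sketched in a few lines of Python. This is only an illustration: the panel records, profile fields and quota cells below are purely hypothetical.

```python
# Sketch: quota sampling from a pre-profiled online panel.
# Panel members and quota targets are hypothetical examples.

panel = [
    {"id": 1, "gender": "f", "age_group": "18-29"},
    {"id": 2, "gender": "m", "age_group": "18-29"},
    {"id": 3, "gender": "f", "age_group": "30-49"},
    {"id": 4, "gender": "m", "age_group": "30-49"},
    {"id": 5, "gender": "f", "age_group": "18-29"},
    {"id": 6, "gender": "m", "age_group": "30-49"},
]

# Target number of respondents per (gender, age_group) cell.
quotas = {("f", "18-29"): 1, ("m", "30-49"): 2}

def quota_sample(panel, quotas):
    """Invite panel members until each quota cell is filled."""
    filled = {cell: 0 for cell in quotas}
    invited = []
    for person in panel:
        cell = (person["gender"], person["age_group"])
        if cell in quotas and filled[cell] < quotas[cell]:
            invited.append(person["id"])
            filled[cell] += 1
    return invited

sample = quota_sample(panel, quotas)  # ids of the invited respondents
```

In a real panel the socio-demographic profile collected at registration plays the role of the `gender` and `age_group` fields here.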
Methodological discussions on the use of online surveys also began to be actively conducted
since the mid-1990s. It was pointed out that online surveys can reach the largest available sample at
the lowest cost and in the shortest time compared to any other forms of research that existed before
(Anderson & Gansneder 1995; O'Lear 1996; Kehoe & Pitkow 1996; Schmidt 1999). Sheehan &
McMillan (1999) found that a questionnaire sent to a respondent by regular mail was returned
completed on average in 11.8 days, while the return period was reduced to 7.6 days when sent
electronically. Other researchers, for example, Harris (1997) reported even shorter response times
for e-mail surveys (2-3 days on average).
Dillman (2000), noting that online surveys are based on two previous changes in the field of
research, namely random sampling of respondents (1940s) and the introduction of telephone surveys
(1970s), called them the most significant methodological transformation of research activities.
Couper (2000), in his turn, emphasised that the social implications of the spread of online surveys
are as significant as the methodological ones: the availability and low cost of technology make it
possible for any interested person to conduct research, not just professional researchers, i.e. a kind of "democratisation" of this field is taking place. However, a number of authors have expressed concern
about the possible negative consequences of this phenomenon, primarily methodological and ethical,
as well as the danger of respondents being overwhelmed by participation in surveys.
As early as 1994, the issue of a declining response rate was raised (Schuldt and Totten 1994).
Comley (1996) noted that due to the novelty of the technology used, the response rate to an e-mail
survey was 45% compared to 16% for the same questionnaire distributed by regular mail. However,
Lozar and his co-authors later found in a meta-analysis of data reflecting the response rate to
questionnaires distributed in different ways that online surveys performed the worst (Lozar,
Bosnjak, Berzelak, Haas and Vehovar 2008). The problems of representativeness of online survey
samples have also been widely discussed (Swoboda, Muehlberger, Weitkunat and Schneeweiss
1997; Schaefer and Dillman 1998; Dillman 2000). Coomber (1997) argued that the majority of
respondents to online surveys are wealthy and educated white males from the developed world.
Currently, the need to change and improve the methodology of online surveys is emphasised due to
the spread of Internet technologies and the increase in the number of users of online resources: the
focus of researchers' attention is shifting from studying the activities of an elite minority to
considering the mass activity of users of online resources around the world (Hooley, Wellens,
Marriott 2011).
Methodology of the online survey
The online survey follows the logic of traditional survey methodology. The objective of a
mass survey is to determine the relationship between different variables [3] (for example, between
socioeconomic status and political preferences). It is a survey of a group of people based on a
formed sample: a subgroup of a given population, which allows for relatively reasonable
conclusions about the entire population as a whole. When analysing the data obtained, various
methods of quantitative measurement are usually used: correlation analysis, regression analysis, etc.
[3].
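For illustration, the Pearson correlation coefficient used in such analyses can be computed directly. The variables and scores below are hypothetical and serve only to show the calculation.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: socioeconomic status index vs. strength of a political preference.
status = [1, 2, 3, 4, 5]
preference = [2, 1, 4, 3, 5]

r = pearson(status, preference)  # close to 0.8: a strong positive relationship
```

A value of r near +1 or -1 indicates a strong linear relationship between the two variables; values near 0 indicate no linear relationship.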
The main features of an online survey include the completion of a web-based questionnaire by survey participants, the availability of precise instructions, and the ability to show respondents a variety of stimulus materials [8]. Online surveys make it possible to test photo, video and audio materials. In
general, the online survey toolkit is richer and provides a large number of different features, such as
click-tests, eye-tracking techniques, 3D product modelling, visual scales for measuring emotions,
etc.
Types of online surveys
The most complete classification of online surveys to date was proposed by M. Couper, who, based on the ideas of L. Kish and R. Groves, classifies web surveys by the type of sample (random/non-random) [9].
Non-random survey approaches include the following types:
Web surveys as entertainment (opinion polls, "poll of the day");
Self-selected web surveys (publishing an open invitation to a survey);
Volunteer panels of Internet users (formation of a volunteer database with subsequent sample formation and invitation to the survey).
Random (probability) survey approaches include the following types:
Intercept surveys (survey of the website audience, control of repeat participation)
List-based samples of high-coverage populations (inviting selected respondents to
participate, for groups widely accessible via the Internet, e.g. students)
Mixed-mode designs with choice of completion method.
Pre-recruited panels of Internet users (randomly selected from a panel based on IP address,
personal number, etc.)
Probability samples of full population (random selection not via the Internet, ensuring
technical feasibility for all those selected in the online environment in the future, using a
panel design) [10].
Advantages and disadvantages of online surveys
Online surveys have advantages over other indirect methods, such as telephone surveys, in terms of higher respondent willingness to participate and lower cost per interview. They also increase participant engagement by combining visual, audio and textual channels of perception, and they let respondents choose a convenient time and place of participation [11].
In general, the following advantages of online surveys can be identified:
saving resources (not only money, but also time and labour);
large sample size;
speed of the survey (it is possible to interview several thousand people in a short time);
the possibility of prompt response (e.g., change of instruments);
breadth of coverage (overcoming borders and distances, access to different social groups and
communities);
accessibility (the ability to survey those who are inaccessible in real life, such as
marginalised groups);
targeting (possibility to build a specific sample);
remoteness (independence) of communication, i.e. a lower level of interviewer influence on the respondent, and the ability to give more detailed answers;
high level of trust (due to the anonymity of the online environment);
breadth of coverage of subject areas (the ability to study sensitive and closed for public
discussion topics);
organisational flexibility (the respondent chooses the time and place of participation);
strict survey logic (special software makes it possible to exclude typical mistakes in filling in the questionnaire);
operational control over the process of filling out the questionnaire (for example, identifying
logical contradictions in the answers and correcting them).
In addition, online surveys provide additional opportunities (in addition to the ability to
choose a channel of influence). These are the possibilities of further communication with
respondents, automatic collection of additional information, automatic data recording and
processing of questionnaires [12].
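The "strict survey logic" and operational control over answers mentioned above can be sketched as a simple rule checker. The question codes and the sample rule are hypothetical illustrations, not part of any real questionnaire.

```python
# Sketch: automated logical-consistency checks on submitted answers,
# of the kind survey software performs during or after completion.

def find_contradictions(answers, rules):
    """Return messages for every rule violated by an answer set.

    Each rule is (condition, requirement, message): whenever `condition`
    holds for the answers, `requirement` must hold as well.
    """
    violations = []
    for condition, requirement, message in rules:
        if condition(answers) and not requirement(answers):
            violations.append(message)
    return violations

rules = [
    # Hypothetical rule: a respondent who reports never smoking
    # cannot report a positive number of cigarettes per day.
    (lambda a: a.get("smokes") == "never",
     lambda a: a.get("cigarettes_per_day", 0) == 0,
     "non-smoker reports cigarettes per day"),
]

ok = find_contradictions({"smokes": "never", "cigarettes_per_day": 0}, rules)
bad = find_contradictions({"smokes": "never", "cigarettes_per_day": 5}, rules)
```

A real system would attach such rules to questionnaire transitions and ask the respondent to correct the contradiction before submission.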
The main disadvantage of online surveys relates to ensuring sample representativeness. First, there is the lack of a sampling frame. This problem can be successfully solved in studies of organisations with broad network databases, as well as when a sample is built on the basis of a prior survey. Second, there is the problem of coverage, i.e. the inability of the sampling procedure used to cover the real population (that is, to assign a known non-zero probability of selection to each unit of the population). Third, there are non-responses, or refusals to participate. Usually, the first two problems can be successfully resolved [13].
The main disadvantages and limitations of online surveys include the following:
lack of representativeness (the structure of the population does not coincide with the
structure of users);
spontaneous sampling ("self-selection method");
audience coverage may not correspond to the target audience (e.g., limited to visitors of one
website);
mobility, variability of the social space on the Internet (high "mortality" of the panel, for
example);
repeated participation in the survey (especially in anonymous surveys);
lack of data on the general population (e.g., the structure of the audience of a portal or
forum);
intentional distortion of data;
the possibility of hostile actions ("hacking" of the software);
limited length of the questionnaire (in practice, no more than 20-25 questions);
limited control over the process of filling out the questionnaire, time of filling out and
number of corrections in the answers (important when using some methods);
communication problems (misinterpretation of questionnaire questions, errors in transitions, filling in tables, etc.);
individual system parameters (influence of the software installed on the respondent's
computer).
Some of these shortcomings may be addressed in the near or distant future. For some
surveys, these limitations are critical, while in others they can be ignored [12].
Today, online panels are the leading method of data collection in the field of online surveys. An online panel is a database of selected respondents who have agreed to participate regularly in future online surveys [14]. Participants register on special online portals, answering a series of questions that usually reveal their socio-demographic profile and consumer preferences [15]. Subsequently, this information is used to target online surveys at the desired group. Typically, panelists receive monetary remuneration or prizes for participating in studies.
Depending on the nature of the participants, panels can be consumer, business (B2B) or specialised (panels of car owners, healthcare professionals, etc.). A separate type is the panel organised as an online community, in which participants can communicate with each other.
There are also probability and non-probability online panels, the basis for the division being the way the sample is formed. A non-probability panel is recruited through the user's independent decision to become a panel member. This method of panel formation may cause distrust in the quality of the data [16]: in this situation it is impossible to determine the general population, which undermines validity. A probability panel is formed in accordance with the basic principles of random sampling. Regardless of the specific method of recruiting respondents, the main requirement is that all members of the group of interest have an equal opportunity to join the panel [14]. Recently, the multi-source model has been gaining popularity, in which the sample is formed not only from the panel itself but also from competing panels, social networks and invitations posted on various websites [14]. One way to address the problem of data validity is to create a nationwide online panel based solely on probability sampling [17].
1.2 Psychological testing
Psychological testing is a process used to identify and measure individual psychological
differences between people. This process is also known as psychodiagnostic examination and has
important applications in many areas, including career guidance, professional selection,
psychological counselling, correctional planning and research.
A psychological test is a standardised technique aimed at measuring various aspects of
personality, such as psychophysiological and personality characteristics, abilities, knowledge, skills
and emotional states. These techniques have a number of key characteristics that make them reliable
and effective for psychological assessment.
The procedure of psychological testing is regulated within the framework of psychological
diagnostics, which includes rules for organising and conducting tests, selecting appropriate
techniques and interpreting the results.
Successful application of psychological testing requires highly qualified specialists to conduct the testing, as well as careful preparation and planning of the assessment process.
A standardised psychological test is typically characterised by:
a standardised set of questions or other types of tasks;
one or more measurement scales that allow for quantitative expression of results;
the relationship of each answer to each item to one or more measurement scales (i.e., "test keys");
a standardised test procedure, including unambiguous (standard) instructions for the test taker, rules for using auxiliary information, rules for completing or stopping the test, etc.;
the possibility of automatic (without human intervention) processing of results, i.e. a
formalised procedure for scoring scales using weighting coefficients (keys);
test norms - fixed limits for converting test scores into evaluation categories;
a formalised model of interpretation of results and/or recommendations for making certain
decisions related to certain intervals of values on the scale(s) and combinations of scale
values (if there are two or more scales);
focus on individual quantitative assessment of any characteristic of one person (rather than a
group, team, etc.).
The quality of a test is ensured by a multi-stage process of validation and standardisation of
its scales.
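The elements listed above - test keys, weighted scoring and test norms - can be illustrated with a short Python sketch. All items, weights and norm intervals below are hypothetical and exist only to show how the pieces fit together.

```python
# Sketch: scoring a standardised test with keys, weights and norms.
# Items, scales, weights and norm boundaries are hypothetical.

# Test keys: item -> (scale, weight contributed by a "yes" answer).
keys = {
    "q1": ("anxiety", 1),
    "q2": ("anxiety", 2),
    "q3": ("extraversion", 1),
}

# Test norms: fixed raw-score intervals per scale -> evaluation category.
norms = {
    "anxiety": [(0, 1, "low"), (2, 3, "high")],
    "extraversion": [(0, 0, "low"), (1, 1, "high")],
}

def score(answers, keys, norms):
    """Convert yes/no answers into per-scale evaluation categories."""
    raw = {}
    for item, said_yes in answers.items():
        scale, weight = keys[item]
        raw[scale] = raw.get(scale, 0) + (weight if said_yes else 0)
    result = {}
    for scale, value in raw.items():
        for lo, hi, category in norms[scale]:
            if lo <= value <= hi:
                result[scale] = category
    return result

profile = score({"q1": True, "q2": True, "q3": False}, keys, norms)
```

Because scoring is fully formalised, it can run without human intervention - exactly the property the list above requires of a standardised test.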
The concept of "psychological testing"
With the advent of the first tests, the term "psychological test" became the most commonly used term for the measurement of an individual's psychological characteristics. Initially, the term "psychological testing" was used broadly to cover any kind of measurement in psychological science. As testing developed, its scope narrowed to the measurement of personality and cognitive characteristics. The term "psychological diagnosis" was first used in 1921
by Rorschach, who named the examination process after the "inkblot test" he created, later called the
Rorschach test. Later, the term "psychodiagnostics" was used as a synonym for "psychological
testing", gradually replacing its use.
The emergence of the concept of "psychodiagnostics" is associated with the development of projective techniques, which reveal an overall picture of the personality, and with a crisis of psychological testing, which fragments the subject into separate functions. At that time, the substantial theory of projective methods developed by psychoanalysts came into use. The concept of "psychodiagnostics" was long identified with projective testing and was used in the works of German and Swiss psychologists.
In the American scientific literature, the concept of "psychological testing" continued to
develop until the 1970s and referred to everything related to the development and application of any
psychological test. During this period, a large number of studies on the history of psychological
testing were published, and the term "psychodiagnostics" fell out of use.
This situation demonstrates an interesting phenomenon related to the development of the
field of research on the measurement of individual psychological differences, which has no
appropriate name, since the term "psychological testing" is more suitable for the process of test
application than for the scientific field.
In the 1970s, in the field of individual differences in Western Europe and the United States, the term "psychological assessment" came to be used increasingly instead of "psychological testing".
Characteristics of psychological testing
Standardisation - testing procedures are standardised, so the data obtained must conform to a normal distribution or to socio-cultural norms. On this basis, a range of values is formed that shows how strongly the characteristic under study is expressed.
Reliability is the property of a test to produce consistent results on repeated measurement. A reliable method gives similar results regardless of background factors such as the time of year or the gender of the experimenter, and the methodology itself should minimise the influence of such factors.
Validity is the degree to which test results correspond to the property the test is intended to measure. A distinction is made between internal and external validity. External validity can be verified through positive correlation with objective achievements, such as the results of intelligence tests, or by comparison with academic performance. Internal validity is more complicated: it concerns theoretical connections, i.e. how well the constructed model actually reflects the aspects it is meant to capture. The task becomes easier if a similar, already proven method exists, in which case the new method can be validated by comparison with the known one.
1.3 Principles and trends in web development
Web development is the process of creating a website or web application. The main stages of
the process are web design, page layout, client and server-side programming, and web server
configuration.
Stages of website development
Today there are several stages of website development:
designing the website or web application (collecting and analysing requirements, developing technical specifications, designing interfaces);
developing a creative website concept;
creating a website design concept;
creating page layouts;
creating multimedia objects;
laying out pages and templates;
programming (developing functional tools) or integrating into a content management system (CMS);
optimising and placing website materials;
testing and making adjustments;
publishing the project on the hosting;
maintaining the working website or its software basis.
Depending on the current task, some of the stages may be missing.
Creating the terms of reference
The terms of reference can be prepared by a designer, analyst, web architect, and project
manager together or separately.
Work with the client begins with filling out a brief, in which the client outlines their wishes regarding the visual presentation and structure of the site, points out errors in the old version of the site, and provides examples of competitors' sites. Based on the brief, the manager draws up the terms of reference, taking into account the capabilities of the software and design tools. The stage is completed when the client approves the terms of reference. It is important to note that the stages
of website design depend on many factors, such as the size of the site, functionality, tasks that the
future resource should perform, and much more. Nevertheless, there are several stages that are
necessarily present in the planning of any project. As a result, the document that describes the terms
of reference may include the following main sections:
Goals and purpose of the website.
Website audience.
Technical specifications.
Site content (site structure with a detailed description of the elements and functions of
each page).
Interactive elements and services (feedback forms, site search, site forum).
Forms (email, newsletter subscription, feedback).
Content management system.
Requirements for materials.
Transfer to hosting.
Design of the main and typical pages of the site
Design work usually starts in a graphic editor. Designers create one or several designs based
on the specifications. The design of the main page and the design of typical pages (for example,
articles, news, product catalogues) are created separately. In essence, a "page design" is a layered graphic file composed of images of the page's standard elements.
At the same time, designers must take into account the limitations of the HTML standard and not create designs that cannot be implemented with standard HTML tools. Design in Flash is an exception.
The project manager determines the number of sketches and the order in which they are
presented. Project managers also keep track of deadlines. In large web studios, art directors are
involved to control the quality of graphics. This stage also ends with the client's approval of the
sketch.
The website development process has several key stages:
HTML coding: after receiving the approved design, the HTML coder cuts the graphic layout
into individual images, which are then used to create HTML pages. This creates code that can be
viewed in a browser. Typical pages are used as templates for other parts of the site.
Programming: the finished HTML files are passed to a programmer who can develop the site
from scratch or using a CMS (content management system). In case of using a CMS, the
programmer (or a CMS specialist) replaces the standard template with an original one created on the
basis of the original web design. In the process of programming, checkpoints are defined to track
deadlines.
Testing: website testing includes various checks, such as displaying pages at different browser window sizes, the presence or absence of a Flash player, usability testing, etc. Detected errors are submitted for correction. The project manager monitors compliance with deadlines. At this stage, a designer may be involved to provide design supervision.
Placing the site on the Internet: the site files are placed on the hosting provider's server and
the necessary settings are made. At this stage, the site is still closed to visitors.
Content filling and publishing: the site is filled with content, such as texts, images, and
downloadable files. Depending on the terms of the project, the content can be created either by the
studio's specialists or by a responsible person from the client's side. This stage often takes place in
parallel with other project stages.
At each stage of development, it is important to monitor the quality of work and compliance
with deadlines, which is ensured by coordinating the work of the entire project team.
Standard text blocks include:
website header;
website footer;
navigation chain, or "breadcrumbs".
Internal (on-page) website optimisation
This process covers changes made directly on the website itself. The first step is to create a semantic core, i.e. to identify the keywords that attract the target audience and make it easier to outrank competitors. These keywords are then integrated into the website, and text, hyperlinks and other tags are optimised for better indexing by search engines.
External website optimisation
Off-page optimisation is mainly about building a network of inbound links, which is a key
component of website promotion. It is not directly related to website development. Optimisation is
divided into "white" and "black", where the latter can quickly bring a website to the top, but also
quickly lead to blocking by search engines. On the other hand, white hat optimisation is a long and
costly process, the cost of which can be much higher than the cost of website development.
Completion of the project
The client or his representative inspects the completed project. If everything meets the
requirements, they sign the project completion documents. At this stage, the client's representative is
also trained on how to manage the administrative part of the website.
Web development tools
Also known as developer tools or inspection tools, they allow developers to test and debug code. They differ from website builders and integrated development environments in that they are not used to create web pages directly, but rather to test and debug the user interface of a website or web application. Web development tools come as browser add-ons or built-in
features in modern web browsers.[1] Browsers such as Google Chrome,[2] Firefox,[3] Internet
Explorer, Safari,[4] Microsoft Edge[5] and Opera[6] have built-in tools to assist web developers[7]
and many additional add-ons can be found in their respective plugin download centres.
Using web development tools
These tools help developers to work efficiently with web technologies such as HTML, CSS,
DOM, and JavaScript that are processed by browsers. With the increasing demands on the
functionality of web browsers [8], leading browsers have introduced additional features for
developers [9].
Support for developer tools by web browsers
Many popular web browsers have built-in developer tools that allow designers and
developers to view and analyse the structure of their web pages without the need to install additional
plug-ins or settings.
- Firefox: pressing F12 opens the web console or browser console, starting with version 4
[10][11]. The web console works for a single tab, while the browser console works for
the entire browser [12]. Numerous add-ons are also available, including Firebug.
- Google Chrome: Chrome Developer Tools (DevTools) [13].
- Internet Explorer and Microsoft Edge: The F12 key opens the web developer tools
starting with version 8 [14][15].
- Opera: Opera Dragonfly [16].
- Safari: Safari web development tools available since version 3 [17].
Open source repositories and GitHub
Open source tools often use platforms such as GitHub for issue tracking, bug reporting, and
community collaboration. Developers can raise issues, add code, and interact directly with the
development team. GitHub repositories serve as centralised hubs for code base management and
community communication.
The most common features of web development tools
Usually, to access the built-in web developer tools in your browser, you can use the "Inspect
Element" option by selecting it from the context menu on a web page element. In addition, the F12
key is often used as a shortcut to these tools [18].
HTML and DOM in web development tools
HTML and DOM viewers and editors are often integrated into web development tools. They
differ from the standard source code viewer in browsers, as they allow you to not only view but also
make changes to HTML and DOM, and immediately see these changes reflected on the web page
[19].
These tools typically include the ability to view and edit HTML elements, as well as display
the properties of DOM objects, such as visual dimensions and CSS properties [20]. In Firefox
versions 11 to 46, a unique 3D page inspector was available that used WebGL to visualise the
hierarchy of elements in three dimensions [21][22].
Web page assets, resources, and network information
Web pages depend on a variety of content such as images, scripts, fonts, and other external
files. Web development tools allow developers to inspect these assets by displaying them in a
structured tree, and they also allow for real-time CSS style checking [23][24].
Developers can also use these tools to view network information such as load times,
bandwidth usage, and what HTTP headers are being sent and received [25].
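For illustration (outside the browser, using only Python's standard library), the header blocks shown in a network panel have a simple key-value structure that can be parsed programmatically; the header values below are made up:

```python
from email.parser import Parser

# Illustrative only: a raw HTTP response header block, similar to what a
# browser's network panel displays for each request.
raw_headers = (
    "Content-Type: text/html; charset=utf-8\n"
    "Content-Length: 1024\n"
    "Cache-Control: max-age=3600\n"
    "\n"
)

# email.parser handles RFC 822-style "Name: value" headers, which HTTP reuses.
parsed = Parser().parsestr(raw_headers)
print(parsed["Content-Type"])   # text/html; charset=utf-8
print(parsed["Cache-Control"])  # max-age=3600
```

Header lookup is case-insensitive, mirroring how HTTP headers are treated by browsers.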
Profiling and auditing
Profiling features let developers analyse the performance of web pages or applications, for example to improve the performance of scripts. Auditing features highlight optimisation opportunities, such as reducing load times or improving page responsiveness. These tools can also record the time it takes to render a page, memory usage, and the kinds of events that occur [26][27].
Debugging JavaScript
JavaScript underpins the dynamic behaviour of most modern web pages. Web development tools usually include a JavaScript debugging panel that allows you to set breakpoints, inspect the call stack, and step through code execution. The built-in JavaScript console lets developers enter JavaScript commands, call functions, and view script execution errors [29][30][31].
CHAPTER 2. DESIGNING A PSYCHOLOGICAL TESTING SYSTEM
2.1 Software life cycle and architecture
Navigating the complex landscape of software development demands a structured approach
that ensures efficiency and coherence throughout the project lifecycle. At the heart of this
methodology lies the waterfall model, a seminal framework that divides the development process
into sequential phases, each building upon the achievements of its predecessor.
Originating from industries like engineering and construction, where meticulous planning
and adherence to predefined stages are paramount, the waterfall model embodies a top-down
progression through distinct phases. These phases, including concept, initiation, analysis, design,
development, testing, implementation, and support, delineate a clear path from project inception to
completion.
The roots of the waterfall model trace back to the pioneering work of Felix Torres and
Herbert D. Benington, who outlined the use of structured stages in software development as early as
1956. However, it was Winston W. Royce's seminal article in 1970 that formalized the concept,
presenting the first detailed diagram of the process that would later be dubbed the "waterfall model."
The waterfall model divides the development process into successive phases, where each phase covers a specific set of tasks and builds on the results of the previous one. The approach originated in manufacturing and construction, where the structured physical environment made late design changes prohibitively expensive; when it was first applied to software development, no alternative methodologies for knowledge-based creative work yet existed.
Although Royce's 1970 article did not itself use the term "waterfall", it contained the first formal detailed diagram of the process later known by that name. In 1985, the United States Department of Defence codified the approach in its DOD-STD-2167A standard for working with software development contractors, requiring six phases: software requirements analysis, preliminary design, detailed design, coding and unit testing, integration, and testing.
- Requirements: the system and software requirements are captured in a product requirements document.
- Analysis: models, diagrams, and business rules are produced.
- Design: the software architecture is created.
- Coding: the software is developed, verified, and integrated.
- Testing: defects are systematically discovered and corrected.
- Operations: complete systems are installed, migrated, supported, and maintained.
The waterfall model therefore emphasises that the transition to the next phase should only
take place after the previous phase has been tested and reviewed.
There are various modified versions of the waterfall model, including Royce's definitive
model, which may include minor or major changes to this process. These variations may include
returning to a previous phase after identifying deficiencies or repeating the design phase completely
if subsequent phases have not been successful.
An important argument in favour of the waterfall model is that it emphasises the importance
of documentation, such as requirements and design documents, as well as source code. In less
structured and well-documented methodologies, information is lost if team members leave the
project before it is completed. The waterfall model provides an easy way for new team members to
become familiar with the project or for team changes, as all the necessary documents are available.
The waterfall model provides a systematic approach as it moves linearly through distinct,
easy-to-understand and explain phases, providing clear milestones in the development. This
structure makes it easy to understand. The waterfall model also provides clear breakpoints in the
development process, making it easier to track progress. Because of this, it is often used as a starting
point for understanding development models in many software engineering textbooks and courses.
Criticism
- Customers may not have well-defined requirements before they see the product in action, which can cause changes in requirements and, in turn, redesign, redevelopment, and retesting at additional cost.
- Developers may not foresee future difficulties when creating a new software product or feature. In such cases it is better to revise the design than to insist on delivering a project that ignores the identified constraints, requirements, or problems.
- Some organisations try to compensate for vague customer requirements by engaging systems analysts to study existing manual systems and analyse their functionality. In practice, however, a strict separation between systems analysis and programming is hard to maintain, because implementing a complex system almost always uncovers problems and edge cases that the systems analyst did not foresee.
In response to these problems, modified waterfall models have been proposed, such as the
Sashimi model, the overlapping phases model, the subprojects model, and the risk reduction model.
Modified waterfall models respond to the criticism of the "pure" waterfall model by introducing
changes to the software development process to reduce costs and facilitate change management.
Software architecture refers to the fundamental structures of a software system and the discipline of creating such structures and systems. Each structure comprises software elements, the relationships between them, and the properties of both elements and relationships.[1][2]
The software system architecture is a metaphor similar to the architecture of a building.[3] It
functions as a blueprint for the evolving system and project that project management can later use to
extrapolate the tasks to be performed by the teams and people involved.
Software architecture is about making fundamental structural choices that are expensive to
change after implementation. Software architecture choices include specific structural options from
the possibilities in the software design.
For example, the systems that controlled the Space Shuttle had to be very fast and very reliable, so an appropriate real-time computing language had to be chosen. In addition, to satisfy the reliability requirement, several backup copies of the application could be built independently and run on independent hardware, with their results cross-checked.
Documenting the software architecture facilitates communication between stakeholders, captures early high-level design decisions, and allows design components to be reused between projects.[4]
Opinions differ on the scope of software architecture:[5]
- Macroscopic system structure: the architecture as a high-level abstraction of a software system, consisting of computational components together with connectors that describe the interactions between them.
- The important stuff, whatever that is: software architects should concern themselves with the decisions that have a large impact on the system and its stakeholders.[7]
- Something fundamental to understanding a system in its environment.[8]
- Things that people perceive as hard to change: since architectural design happens early in a software system's life cycle, the architect should focus on decisions that "have to" be right the first time. By this line of thinking, a design problem ceases to be architectural once its irreversibility is overcome.[7]
- A set of architectural design decisions: a software architecture should not be seen merely as a set of models or structures, but should include the decisions that lead to those structures and the rationale behind them.[9] This insight has prompted substantial research into software architecture knowledge management.[10]
There is no clear distinction between software architecture and requirements design and
development (see related boxes below). They are all part of a "chain of intentionality" from high-
level intentions to low-level details.
In software development, a tiered architecture (often referred to as an n-tier architecture) is a client-server architecture in which the presentation, application processing, and data management functions are physically separated. The most common form of tiered architecture is the three-tier architecture.
A tiered application architecture provides a model by which developers can create flexible,
reusable applications. By dividing an application into tiers, developers are able to change or add a
specific layer instead of redesigning the entire application. A three-tier architecture typically
consists of a presentation tier, a logic tier, and a data tier.
Although the concepts of layer and tier are often used interchangeably, one fairly common view is that there is indeed a difference: a layer is a logical structuring mechanism for the conceptual elements that make up the software solution, while a tier is a physical structuring mechanism for the hardware elements that make up the system infrastructure.[1][2] For example, a three-layer solution can easily be deployed on a single tier, as in the case of an extremely database-centric design called an RDBMS-only architecture [3], or on a personal workstation [4].
In software development, a monolithic application describes a single-tier software program
in which the user interface and data access code are combined into a single application from a single
platform.
A monolithic application is self-contained and independent of other computing applications.
The design philosophy is that a program is not only responsible for a specific task, but can perform
every step necessary to perform a particular function.[1] Today, some personal finance programs are
monolithic in the sense that they help the user complete a complete task, and are private data stores
rather than parts of a larger system of programs that work together. Some word processors are
monolithic programs.[2] These programs are sometimes associated with mainframes.
In software development, a monolithic application describes a software program designed as
a single service. Multiple services may be desirable in certain scenarios because it can facilitate
maintenance by allowing parts of the application to be repaired or replaced without the need for
wholesale replacement.
Modularity is achieved to varying degrees through different modularity approaches. Code-
based modularity allows developers to reuse and repair parts of an application, but requires
development tools to perform these maintenance functions (e.g., the application may need to be
recompiled). Object-based modularity provides an application as a set of separate executable files
that can be independently maintained and replaced without redeploying the entire application (e.g.,
Microsoft "dll" files, Sun/UNIX "shared object" files). Some object messaging capabilities allow object-based applications to be distributed across multiple computers (e.g., Microsoft COM+). Service-oriented architectures use specific communication standards and protocols to communicate between modules.
In its original use, the term "monolithic" described huge mainframe applications without
modularity. This, coupled with the rapid growth of computing power and thus the rapid growth in
the complexity of problems that could be solved by software, led to systems that were not
maintainable and a "software crisis".
The architectural pattern "Layers" is described in various publications.
Common layers
In the logical multi-tiered architecture of an information system with an object-oriented
design, the most common are the following four:
- Presentation layer (also known as the user interface layer, view layer, or presentation tier in a multi-tier architecture)
- Application layer (also known as the service layer[6][7] or GRASP controller layer[8])
- Business layer (also known as the business logic layer (BLL) or domain logic layer)
- Data access layer (also known as the persistence layer; covers logging, networking, and other services required to support a particular business tier)
The book Domain-Driven Design describes some common uses of the above four layers,
although it mainly focuses on the domain layer.
If there is no clear distinction between the business tier and the presentation tier in the
application architecture (i.e. the presentation tier is considered part of the business tier), then the
traditional client-server (two-tier) model is implemented.
A more common convention is that the application layer (or service layer) is considered a sub-layer of the business layer, typically encapsulating an API definition that exposes the supported business functionality. The application/business layers can in fact be subdivided further to emphasise additional sub-layers of distinct responsibility. For example, if the model-view-presenter pattern is used, the presenter sub-layer can act as an additional layer between the user interface layer and the business/application layer (as represented by the model sub-layer).
Some also define a separate layer, called the business infrastructure (BI) layer, located
between the business layer(s) and the infrastructure layer(s). It is also sometimes called the "low-
level business layer" or "business services layer". This layer is very generic and can be used across
multiple application layers (e.g. in a currency converter).[10]
The infrastructure layer can be divided into different tiers (high-level or low-level technical
services).[10] Developers often focus on the persistence (data access) capabilities of the
infrastructure layer and therefore only talk about the storage layer or data access layer (instead of the
infrastructure layer or technical services layer). In other words, another type of technical service is
not always explicitly considered as part of any particular layer.
A layer is on top of another because it depends on it. Each layer can exist without the layers
above it and needs the layers below it to function. Another common view is that layers are not
always strictly dependent only on the neighbouring layer below. For example, in a relaxed layered
system (as opposed to a strict layered system), a layer can also depend on all the layers below it.
Three-tier architecture
The three-tier architecture is a model of client-server software architecture in which the user
interface (presentation), functional process logic ("business rules"), computer storage, and data
access are developed and maintained as independent modules, often on separate platforms. [11] It
was developed by John J. Donovan at the Open Environment Corporation (OEC), a tooling
company he founded in Cambridge, Massachusetts.
In addition to the usual advantages of modular software with well-defined interfaces, the
three-tier architecture is designed to allow any of the three layers to be updated or replaced
independently in response to changes in requirements or technology. For example, a change in the
operating system at the presentation layer will only affect the user interface code.
Typically, the user interface runs on a desktop or workstation and uses a standard graphical
user interface, functional process logic that may consist of one or more separate modules running on
a workstation or application server, and an RDBMS on a database server or mainframe that contains
the computer logic for storing data. The middle tier itself can be multi-tiered (in this case, the overall
architecture is called an "n-tier architecture").
Presentation layer
This is the highest level of the application. The presentation layer displays information
related to services such as product browsing, purchases, and shopping cart contents. It
communicates with the other layers, which deliver results to the browser/client layer and all other
layers on the network. Simply put, this is the layer that users have direct access to (e.g., a web page
or an operating system GUI).
Application layer (business logic, logic layer, or middleware layer)
The logic tier is pulled out from the presentation tier and, as its own layer, controls the application's functionality by performing detailed processing.
Data layer
The data tier includes the mechanisms for storing data (database servers, shared files, etc.)
and the data access tier, which encapsulates the storage mechanisms and exposes the data. The data
access layer should provide an API to the application layer that provides methods for managing
stored data without exposing or creating dependencies on the storage mechanisms. Avoiding
dependencies on storage mechanisms allows updates or changes to be made without affecting or
even making the application layer clients aware of them. As with decoupling at any level, there are
implementation costs and often performance costs in exchange for improved scalability and ease of
maintenance.
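As a minimal sketch of this decoupling (the `UserRepository` class and its methods are illustrative, not part of the system designed here), the application layer calls storage-neutral methods while SQL and the storage engine stay hidden inside the data access layer:

```python
import sqlite3

class UserRepository:
    """Data access layer: the application layer calls these methods and
    never sees SQL or the storage mechanism behind them."""

    def __init__(self, connection):
        self._conn = connection
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")

    def add(self, name):
        # Parameter substitution keeps the API free of storage details.
        cur = self._conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        return cur.lastrowid

    def find(self, user_id):
        row = self._conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
        return row[0] if row else None

# Application-layer code works only with the repository's methods.
repo = UserRepository(sqlite3.connect(":memory:"))
uid = repo.add("Alice")
print(repo.find(uid))  # Alice
```

Replacing SQLite with another storage mechanism would change only the repository's internals, at the usual cost of some indirection, which is exactly the trade-off described above.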
Use in web development
In the field of web development, three-tier is often used to refer to websites, usually e-
commerce websites, that are built using three tiers:
- A front-end web server that serves static content and possibly some cached dynamic content. In a web application, the front end is the content the browser renders, which may be static or generated dynamically.
- A middle-tier application server for processing and generating dynamic content (e.g., Symfony, Spring, ASP.NET, Django, Rails, Node.js).
- A back-end database or data store, comprising both the data sets and the database management system software that manages and provides access to the data.
Other considerations
The transfer of data between tiers is part of the architecture. The protocols involved can
include one or more of SNMP, CORBA, Java RMI, .NET Remoting, Windows Communication
Foundation, sockets, UDP, Web services, or other standard or proprietary protocols. Often,
middleware is used to connect individual layers. The individual tiers often (but not necessarily) run
on separate physical servers, and each tier can run in a cluster by itself.
Traceability
End-to-end traceability of data flows through n-tier systems is a challenging task that becomes more important as systems grow in complexity. Application Response Measurement defines concepts and APIs for measuring performance and correlating transactions between tiers. The term "tiers" usually describes the physical distribution of system components across separate servers, computers, or networks (processing nodes); a three-tier architecture then has three processing nodes. The term "layers", by contrast, refers to a logical grouping of components that may or may not be physically located on the same processing node.
2.2 Designing the system functionality
Crafting the functional framework of a system demands meticulous attention to detail and a
thorough exploration of potential use cases. By delving into these scenarios, designers gain
invaluable insights into the system's purpose and envisaged functional activities, laying the
groundwork for its development and deployment.
At the heart of this exploration lies the use case diagram, a fundamental tool for visualizing
the interactions between users and the system. This diagram serves as a blueprint, delineating
various user interactions and scenarios of system utilization. By encapsulating different user roles
and usage patterns, use case diagrams offer a holistic view of the system's functionality, facilitating
effective communication with stakeholders.
While the intricacies of each use case warrant scrutiny, the use of diagrams streamlines the
process, offering stakeholders a high-level overview of the system's architecture and functionality.
As the adage goes, "a use case is the essence of your system," encapsulating its core functionalities
in a succinct and accessible manner.
Use case diagrams can identify different types of system users and different usage patterns, and they are usually combined with other kinds of diagrams to give a complete picture. Because of their simplicity, use cases are an effective communication tool for stakeholders: these depictions attempt to mirror the real world, and research has shown that use case diagrams convey a system's intent to stakeholders more clearly than class diagrams do. Their main purpose is to show the dynamic aspects of the system; for a fuller functional and technical description, other diagrams and documents are used alongside them, while the use case diagram itself provides a simplified graphical representation of how the system should function.
- System boundary: a rectangle, labelled with the system name, that encloses the use case ellipses; it may be omitted if it adds no useful information.
- Actor: a stylised figure representing a role, i.e. a set of users (or other entities) that interact with the system. Actors are not related to each other.
- Use case: an ellipse with a caption naming a system operation that the user triggers through the interface and that leads to a certain result. The name should describe "what" is done in the system, not "how". Use cases represent the different scenarios in which the system is used.
Figure 2.1 shows a use case diagram that describes the possible actions a user can take in a
system.
Figure 2.1 - Use case diagram
2.3 Design of the internal structure
In the field of software engineering, a class diagram within the Unified Modelling Language
(UML) is a type of structural diagram that statically describes the architecture of a system. It shows
classes within a system, their attributes, methods, and the relationships between objects.
The class diagram is a key element in the object-oriented modelling methodology. It is used
both for general conceptualisation of the structure of software systems and for detailed planning of
the transformation of models into code. Such diagrams can also be used for modelling databases. In
a class diagram, each class is represented as a box with three sections:
- The top section contains the name of the class, which is displayed in bold and centred, with
the first letter of the name capitalised.
- The middle section presents the class attributes, aligned to the left, with the first letter
lowercase.
- The bottom section contains the operations available to the class, also aligned to the left,
with the first letter lowercase.
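As an illustration, the three sections map naturally onto a class definition in code; the `Account` class below is hypothetical, not part of the designed system:

```python
class Account:                      # top section: class name
    def __init__(self, owner, balance=0.0):
        self.owner = owner          # middle section: attributes
        self.balance = balance

    def deposit(self, amount):      # bottom section: operations
        self.balance += amount
        return self.balance

acc = Account("Alice")
print(acc.deposit(50.0))  # 50.0
```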
In the process of system design, classes are identified and grouped to form a class diagram,
which helps to define their static relationships. In detailed modelling, classes from the conceptual
design are often divided into subclasses.
A dependency in UML represents a semantic relationship between model elements in which a change to one element (the supplier, or target) can affect the other (the client, or source). It is drawn as a dashed line with an open arrowhead pointing from the client to the supplier, and it is unidirectional.
To complete the description of the system behaviour, class diagrams can be supported by
state diagrams or a UML state machine.
An association in a diagram represents a group of relationships. A binary association that has
two ends is usually depicted as a line. An association can connect any number of classes. A ternary
association includes three elements. Associations can be denoted by names, and the ends can be
supplemented with role names, ownership, multiplicity, visibility, and other characteristics.
In software engineering, there are four main types of associations: bidirectional,
unidirectional, aggregation (including composite aggregation), and reflexive. Among them,
bidirectional and unidirectional associations are the most common.
Bidirectional association, as in the case of flight class and aircraft class, reflects a reciprocal
relationship between objects of the two classes.
Aggregation is a "has a" relationship, a more specific form of association that represents a part-whole or part-of relationship. For example, the relationship between a professor and the classes they teach is an aggregation. Like ordinary associations, aggregations can carry names and other adornments, but they are restricted to binary relationships and are not always clearly distinguishable from plain associations in an implementation.
Aggregation can be used when a class is a collection or container of other classes without a strong lifecycle dependency on the container. For example, the relationship between a student and a library is an aggregation, because the student can exist independently of the library. Graphically, UML represents an aggregation as a hollow diamond on the containing class, connected to the contained class by a line.
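The library/student aggregation above can be sketched in code (hypothetical classes, shown only to illustrate the independent lifecycles):

```python
class Student:
    def __init__(self, name):
        self.name = name

class Library:
    """Aggregation: the library holds Students but does not own
    their lifecycle."""
    def __init__(self):
        self.members = []

    def register(self, student):
        self.members.append(student)

alice = Student("Alice")        # exists before, and independently of, the library
library = Library()
library.register(alice)
print(library.members[0].name)  # Alice
del library                     # the aggregate is gone, the student survives
print(alice.name)               # Alice
```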
Generalisation in UML is a relationship where one class (subtype) is considered a specialised
version of another class (supertype). This means that an instance of a subtype is also an instance of a
supertype. This hierarchy is found, for example, in biological classification. In UML, a
generalisation is represented as a hollow triangle at the end of a superclass that connects it to one or
more subtypes. This relationship is also known as inheritance. The superclass in a generalisation relationship can be called the "parent class", and a subtype the "child class" or derived class.
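Generalisation corresponds directly to inheritance in code; a minimal sketch with hypothetical classes:

```python
class Vehicle:                  # supertype (parent class)
    def describe(self):
        return "vehicle"

class Car(Vehicle):             # subtype (child class); the generalisation
    def describe(self):         # arrow would point from Car to Vehicle
        return "car, a kind of " + super().describe()

c = Car()
print(isinstance(c, Vehicle))   # True: an instance of the subtype is also
                                # an instance of the supertype
print(c.describe())             # car, a kind of vehicle
```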
A realization in UML describes a relationship between two elements in which one (the client) implements the behaviour specified by the other (the supplier). It is drawn as a hollow triangle on the interface end, connected to the implementers by a dashed line; in component diagrams, the ball-and-socket notation is used instead.
Dependency in UML indicates a weak relationship between classes, when one class depends
on another at a certain point in time. This can be implemented through member function arguments
or local variables.
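Such a transient dependency, where the used class appears only as a method argument rather than as an attribute, can be sketched as follows (hypothetical classes):

```python
class Printer:
    def render(self, text):
        return f"[{text}]"

class Report:
    """Report depends on Printer only transiently: Printer appears as a
    method argument, not as an attribute (an attribute would instead
    make this an association)."""
    def publish(self, printer, body):
        return printer.render(body)

print(Report().publish(Printer(), "quarterly results"))  # [quarterly results]
```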
An association in UML is represented by a line connecting two classes, with the ability to
add arrowheads, ownership, roles, and multiple instances at each end.
Entity classes model long-lived information and the behaviour associated with it. They do not necessarily correspond to tables in databases. In UML diagrams they can be drawn either as circles with a short line attached or as ordinary classes with the "entity" stereotype above the class name.
Figure 2.2 - Class diagram
2.4 Database design
At the current stage of the project, an SQLite database is used, but the system is designed so that, with minimal changes, it can be switched to any database available on the server. The database consists of two entities, which are shown in Figure 2.3.
Figure 2.3 - Database structure
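The intended portability can be sketched as follows; this is an illustrative fragment rather than the project's actual code, and it assumes any replacement driver follows the Python DB-API. If every module obtains its connection from a single factory function, switching from SQLite to another server-side database only requires changing that function:

```python
import sqlite3

# Illustrative sketch: all modules obtain connections through one factory,
# so replacing sqlite3 with another DB-API-compatible driver means
# changing only this function.
def get_connection(url=":memory:"):
    return sqlite3.connect(url)

conn = get_connection()
conn.execute("CREATE TABLE tests (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("INSERT INTO tests (title) VALUES (?)", ("Anxiety scale",))
title = conn.execute("SELECT title FROM tests").fetchone()[0]
print(title)  # Anxiety scale
```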
SECTION 3. DEVELOPMENT AND TESTING OF A SOFTWARE PRODUCT
3.1 Choosing development tools
The process of selecting development tools is a critical step in laying the foundation for any
software project. In this context, Python emerges as a compelling choice due to its versatility,
readability, and extensive capabilities. Python, a high-level general-purpose programming language,
leverages indentation to enhance code readability, facilitating the creation of logical and organized
codebases across projects of varying complexity.
One of Python's defining features is its support for multiple programming paradigms,
including structural, object-oriented, and functional programming. This flexibility empowers
developers to adopt diverse coding approaches tailored to the specific requirements of their projects.
Moreover, Python ships with a comprehensive standard library, a quality often summarised by the phrase "batteries included", giving developers a rich set of tools and modules to streamline development.
Originating in the late 1980s as a successor to the ABC programming language, Python has
undergone significant evolution under the stewardship of its creator, Guido van Rossum. From its
initial release as Python 0.9.0 in 1991, the language has been marked by continuous innovation and
refinement; the 2.x line concluded with version 2.7.18 in 2020. Python 2.0, introduced in 2000,
brought pivotal advancements such as list comprehensions, a reference-counting garbage collection
system, and Unicode support, laying the groundwork for subsequent developments.
However, Python's transformative milestone arrived with the release of Python 3.0 in 2008.
Representing a major revision of the language, Python 3.0 introduced fundamental changes that
were not fully backward compatible with previous versions. Despite initial challenges with code
migration, Python 3.0 ushered in a new era of enhanced functionality and improved language
design. The introduction of utilities like 2to3 facilitated the transition, albeit partially, by automating
the conversion of Python 2 code to Python 3.
Python has consistently been among the most popular programming languages. Its creation and
development were led by Guido van Rossum until 2018, when he stepped down from leadership of
the project, having been known as Python's "Benevolent Dictator for Life". His contributions have
been recognised by the Python community, and he remained a member of the five-person steering
council, which also included Brett Cannon, Nick Coghlan, Barry Warsaw, and Carol Willing.
Initially, the end of support for Python 2.7 was set for 2015 but, owing to concerns about the
difficulty of porting existing code to Python 3, it was postponed to 2020. After the end of Python
2's life cycle, only Python 3.6.x and later are supported; no further security fixes or other updates
are provided for Python 2.
Like any language implementation, Python has had occasional security issues. For example, the releases of
Python 3.9.2 and 3.8.8 were expedited because earlier versions contained vulnerabilities that opened
the possibility of remote code execution and web cache poisoning.
Python is a programming language with different paradigms. It fully supports object-oriented
and structural programming, and some of its features support functional and aspect-oriented
programming, including metaprogramming and magic methods. The language extension also
supports other paradigms such as contract design and logic programming.
Python uses dynamic typing and a combination of reference counting and a cycle-detecting
garbage collector for memory management. The garbage collector is responsible for reclaiming
memory and detecting cyclic references. In addition, the language resolves names dynamically,
binding method and variable names at runtime (late binding).
Python offers a number of features typical of functional languages in the Lisp tradition: the
filter, map, and reduce functions, as well as list, dictionary, and set comprehensions and generator
expressions. The standard Python library includes two modules, itertools and
functools, which implement functional tools borrowed from Haskell and Standard ML.
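A brief, hedged sketch of these functional tools (the values are purely illustrative):

```python
from functools import reduce
from itertools import islice, count

# map and filter come from the functional tradition
squares = list(map(lambda x: x * x, [1, 2, 3, 4]))   # [1, 4, 9, 16]
evens = list(filter(lambda x: x % 2 == 0, squares))  # [4, 16]

# reduce lives in the functools module in Python 3
total = reduce(lambda acc, x: acc + x, squares, 0)   # 30

# comprehensions cover lists, sets, dicts, and generators
square_set = {x * x for x in [1, 2, 2, 3]}           # {1, 4, 9}
square_map = {x: x * x for x in range(3)}            # {0: 0, 1: 1, 2: 4}

# itertools supplies lazy tools, e.g. slicing an infinite iterator
first_five = list(islice(count(1), 5))               # [1, 2, 3, 4, 5]
```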
The basic principles of the Python language are set out in the Zen of Python (PEP 20)
document. They include aphorisms such as "Beautiful is better than ugly", "Explicit is
better than implicit", "Simple is better than complex", and "Readability counts". These
principles define the programming style that Python encourages and contribute to its distinctive character.
Python does not include all functions directly in its core, but it is designed to be extensible
through modules. This makes the language particularly popular for creating programmable
interfaces for existing applications. The philosophy behind Python, as expressed by Guido van
Rossum, is that the language has a small core syntax, a large standard library, and easy
extensibility. This is reflected in its rejection of grammatical complexity and in the freedom
developers have to choose their own coding style. In contrast to the motto "There's more than one
way to do it", Python favours one obvious way to do a thing, which simplifies code and avoids confusion.
Python's developers avoid premature optimisation and reject changes to non-critical parts of the
CPython reference implementation that would offer only marginal speed gains at the cost of clarity.
When high speed is required, programmers can move time-critical functions to extension modules
written in languages such as C, or use PyPy, a just-in-time compiler. The Cython tool is also
available, which translates Python code into C and allows direct calls to the Python interpreter's
C-level API.
Python's user experience is one of the most important goals for its developers. This is reflected
in the language's name, a tribute to the British comedy group Monty Python. The same sense of fun
shows up in tutorials and reference material, where the metasyntactic variables spam and eggs are
often used in examples in place of the traditional foo and bar.
Python users and admirers, especially those considered knowledgeable or experienced, often
identify themselves as Pythonistas. Python promotes readability: it has an uncluttered visual layout
and often uses English keywords where other languages use punctuation. A distinctive feature
of Python is the absence of curly brackets for delimiting code blocks; indentation serves that purpose
instead. Semicolons after statements are permitted but rarely used. Compared to languages such as
C or Pascal, Python has fewer grammatical exceptions and special cases.
Python uses whitespace indentation, rather than curly braces or keywords, to delimit blocks of
code. An increase in indentation follows certain statements, and a decrease marks the end of the
current block, so the visual structure of a program accurately reflects its semantic structure. This
approach is known as the off-side rule; some other languages follow similar rules, but in Python
indentation carries semantic meaning. The recommended indentation size is four spaces.
Python offers a wide range of statements, including:
The assignment statement, which uses the equals sign (=) to bind a value to a name.
The conditional if statement, which works together with else and elif to branch the flow of
execution.
The for statement, which iterates over an iterable object and executes a block of code for
each element.
The while statement, which executes a block of code as long as the specified condition is
true.
The try statement, which handles exceptions and can execute cleanup code regardless of
whether errors occurred.
The raise statement, which raises an exception or re-raises a caught exception.
The class statement, which defines a class and underpins object-oriented programming.
The def statement, used to define functions or methods.
The with statement, which runs a block of code within a context manager (for example, opening
and then automatically closing a file), ensuring that set-up and tear-down are performed correctly.
The break and continue statements, which interrupt a loop or skip to its next iteration,
respectively.
The del statement, which removes a name and its binding to a value.
The pass statement, used as an empty placeholder block.
The assert statement, which checks a given condition and is useful during debugging.
The yield statement, used to return values from a generator function.
The return statement, used to return values from functions.
The import statement, which makes modules, functions, or variables available in the
current program. There are several forms of the import statement, depending on the
user's needs.
Together, these statements give the language its expressiveness and make development
straightforward.
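The statements listed above can be illustrated with a short, self-contained sketch (the function names classify and countdown are invented for this example):

```python
def classify(n):
    """def defines a function; if/elif/else branch on conditions."""
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    else:
        return "positive"

def countdown(n):
    """while runs as long as its condition holds; yield makes a generator."""
    while n > 0:
        yield n
        n -= 1

results = []
for value in (-2, 0, 5):        # for iterates over any iterable
    results.append(classify(value))

try:                            # try/except/finally handle exceptions
    1 / 0
except ZeroDivisionError:
    outcome = "caught"
finally:
    cleanup_ran = True          # runs whether or not an error occurred

assert outcome == "caught"      # assert checks a condition during debugging
steps = list(countdown(3))      # consumes the generator: [3, 2, 1]
```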
In Python, the role of the assignment statement (=) is to bind a variable
name to a dynamically allocated object; the name can later be rebound to any other object. The
variable name is just a reference, and no data type is fixed to that reference, although at any given
moment it refers to an object of some definite type. This is called dynamic typing, in contrast to
statically typed programming languages, where each variable may hold only values of a declared type.
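A minimal sketch of dynamic typing and name binding (all names are illustrative):

```python
# A name is only a reference; the same name can be rebound
# to objects of different types at runtime.
value = 42
assert type(value) is int

value = "forty-two"        # rebinding to a str is not a type error
assert type(value) is str

# Two names can refer to the same object
a = [1, 2, 3]
b = a                      # b references the same list object as a
b.append(4)
assert a == [1, 2, 3, 4]   # the change is visible through both names
```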
Python does not support tail-call optimisation or first-class continuations and, according
to Guido van Rossum, never will. However, version 2.5 improved support for coroutine-like
functionality by extending Python's generators: before 2.5, a generator was a lazy iterator that could
only pass information outward, whereas from 2.5 onward it can also receive information sent into it.
Version 3.3 added the ability to delegate through several stack levels to a subgenerator, which
further expands the possibilities of interacting with generators.
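A sketch of the generator capabilities described above, assuming Python 3.3 or later for yield from (the accumulator function is a hypothetical example):

```python
def accumulator():
    """Coroutine-style generator: values sent in are summed."""
    total = 0
    while True:
        received = yield total   # 2.5+: generators can receive values
        if received is None:
            break
        total += received

acc = accumulator()
next(acc)                  # prime the generator up to the first yield
assert acc.send(10) == 10  # send a value in, get the running total back
assert acc.send(5) == 15

def inner():
    yield 1
    yield 2

def outer():
    # 3.3+: yield from delegates to a subgenerator, passing values
    # and exceptions through the intervening stack level
    yield from inner()
    yield 3

assert list(outer()) == [1, 2, 3]
```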
Some Python expressions resemble those of C and Java, but there are also unique solutions:
- Addition, subtraction, and multiplication work as expected, but division comes in two kinds: floor
division (//) and floating-point division (/). The ** operator is used for exponentiation.
- Python 3.5 introduced the @ infix operator, used by libraries such as NumPy for matrix
multiplication.
- Python 3.8 introduced the := syntax, known as the walrus operator, which assigns values to
variables as part of a larger expression.
- The == comparison is performed by value, unlike Java, which compares primitive values by value
but objects by reference. The is operator compares object identity (reference comparison).
Comparisons can also be chained, for example a <= b <= c.
- Python uses the keywords and, or, and not for Boolean operators, rather than the symbols &&, ||,
and ! used in Java and C.
- Python has a special kind of expression known as the list comprehension, as well as generator
expressions.
- Anonymous functions are implemented with lambda expressions, which are limited in that the
body of the function can only be a single expression.
- A conditional expression is written x if c else y, evaluating to x when the condition c is true and to
y otherwise; the construct is similar to those in many other languages.
- Python distinguishes between lists and tuples. A list is written in square brackets, for example
[1, 2, 3], while a tuple is written in parentheses and is immutable.
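A few of these expressions in one runnable sketch (Python 3.8+ is assumed for the walrus operator; all names are illustrative):

```python
# Two kinds of division; ** is exponentiation
assert 7 / 2 == 3.5 and 7 // 2 == 3
assert 2 ** 10 == 1024

# The walrus operator (3.8+) assigns inside an expression
if (n := len([1, 2, 3])) > 2:
    size = n
assert size == 3

# == compares by value, `is` compares identity; comparisons chain
x = [1, 2]
y = [1, 2]
assert x == y and x is not y
assert 1 <= 2 <= 3

# Keyword Boolean operators instead of &&, ||, !
assert (True and not False) or False

# Conditional expression and a single-expression lambda
parity = "even" if 4 % 2 == 0 else "odd"
double = lambda v: v * 2
assert parity == "even" and double(21) == 42

# Lists are mutable, tuples are not
items = [1, 2, 3]
point = (1, 2, 3)
assert items[0] == point[0]
```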
Python differs from other programming languages, such as Common Lisp, Scheme, or Ruby,
in drawing a strict distinction between expressions and statements, which leads to some duplication
of functionality (for example, list comprehensions versus for loops). The language is widely used in
artificial intelligence and machine learning projects, with libraries such as TensorFlow, Keras,
PyTorch, and scikit-learn. Thanks to its modular architecture, simple syntax, and advanced text
processing capabilities, Python is also widely used for natural language processing.
Python is also widely used in a variety of software, including finite element programs such
as Abaqus, 3D parametric models (FreeCAD, 3ds Max, Blender, Cinema 4D, Lightwave, Houdini,
Maya, modo, MotionBuilder, Softimage, Nuke Visual Effects Composer), 2D image processing
programs (GIMP, Inkscape, Scribus, Paint Shop Pro), and others. Esri has recognised Python as the
preferred scripting language for ArcGIS, and it is also used in video game development and is one
of the three programming languages used in Google App Engine along with Java and Go. Python is
a standard component of many operating systems, including most Linux distributions, AmigaOS 4,
FreeBSD, NetBSD, OpenBSD, and macOS, and can be used from the command line. On Linux
systems, installers such as Ubiquity on Ubuntu and Anaconda on Red Hat Linux and Fedora are
often written in Python. Gentoo Linux uses Python in its Portage package management system.
Python is also used in the field of information security and exploit development.
In educational projects, such as the Sugar software for the One Laptop per Child XO developed by
Sugar Labs, Python is the main programming language. The Raspberry Pi project has also adopted
Python as its primary language for users. LibreOffice integrates Python and intends to replace Java
with Python, starting with version 4.0, released on 7 February 2013.
Python is particularly valued by programmers for several key advantages. Firstly, Python is
an easy-to-learn language with a simple and straightforward syntax, making it an ideal choice for
beginners. Secondly, the large number of available third-party libraries and modules, such as
NumPy, Pandas, and TensorFlow, makes Python a powerful tool for developing complex
applications, with performance-critical work delegated to optimised library code.
Python is distinguished by its high code readability, which allows programmers to write clear
and understandable code. This characteristic makes it easy to maintain and understand programs,
facilitating collaboration between developers. In addition, support for object-oriented programming
allows you to effectively structure code into classes and objects.
Python has found wide application in various fields, from web development to scientific
computing, artificial intelligence, data analysis, web scraping, and others. This language is one of
the leaders in data analytics and machine learning, thanks to powerful libraries that simplify the
solution of complex tasks.
Python's portability is also an important advantage. It is compatible with a variety of
operating systems, including Windows, macOS, and Linux, allowing you to run Python applications
on different platforms without the need to modify your code.
An active developer community is another important advantage of Python. Numerous
resources, such as forums, blogs, and online courses, help developers get support, share experiences,
and solve problems.
Compared to other programming languages, Python is simple, efficient, and fast to develop.
Its simple syntax and numerous ready-made libraries make it easy to write code quickly and ensure
the efficiency of project implementation. Python is easy to learn and use, allowing developers to
implement projects in a wide range of areas.
All in all, its powerful features and versatility make Python an excellent choice for a variety
of programming tasks, which explains its popularity and widespread use in the programming world.
PyCharm, developed by JetBrains, is an integrated development environment (IDE) that
specialises in the Python programming language. It is a cross-platform solution available for
Windows, macOS, and Linux. PyCharm offers two versions: Community Edition with an Apache
licence, which is free, and Professional Edition, which includes additional features and has its own
licence.
PyCharm's main features include support for auto-completion, syntax highlighting and error
detection, integrated linter, and quick fixes. It provides easy project and code navigation, Python
code refactoring capabilities, and a built-in debugger. PyCharm also supports web frameworks such
as Django, web2py, and Flask (Professional Edition only) and has integrated support for unit testing
with line-by-line code coverage.
PyCharm integrates with various version control systems such as Mercurial, Git, Subversion,
Perforce, and CVS, and has capabilities for developing scientific applications using libraries such as
matplotlib, numpy, and scipy (Professional Edition only). It also supports development for the
Google App Engine Python (Professional Edition only).
PyCharm faces competition from other IDEs aimed at Python development, such as PyDev
for Eclipse and Komodo IDE. With the help of APIs, developers can extend the capabilities of
PyCharm and write their own plugins. In addition, there are a large number of plugins that are
compatible with PyCharm.
The first version of PyCharm was released in July 2010, and since then, several updates and
versions have been released. In October 2013, an open source version of PyCharm - the Community
Edition - became available.
Django is a high-level Python web application development framework that follows the
model-template-view (MTV) pattern, Django's own variant of the classic model-view-controller
(MVC) architecture. First released in 2005, Django is recognised for its speed, flexibility, and
security. It provides developers with a set of ready-to-use components for building reliable and
scalable web applications.
Key Features of Django
Fast development: Django was developed with the idea of enabling rapid web application
development. It includes a set of built-in components that cover most web application development
needs.
DRY principle: Django follows the DRY (Don't Repeat Yourself) principle. It promotes code
reuse and ensures that every piece of information has a single, clear place in the code.
MTV architecture: Django uses its model-template-view variant of the MVC pattern, dividing an
application into three main parts: the model (data), the template (presentation), and the view
(request-handling logic).
Security-oriented: Django has strong security mechanisms that protect a web application
from many common attacks such as SQL injection, cross-site scripting, cross-site request forgery,
and others.
Built-in administration: Django provides a ready-made administration panel that allows you
to easily manage application data. This makes content management much easier.
Project scaffolding: Django's management commands (startproject, startapp) generate the skeleton
of a project or application, allowing development to begin quickly.
Wide range of libraries and plugins: Django is supported by a large community of
developers, which provides a large number of third-party libraries and extensions.
Developing with Django
Django development has several key stages:
Data modelling: development starts with defining models that represent the data structure in
the database. Django ORM (Object-Relational Mapping) allows you to work with the database using
Python code.
Creating views and templates: Django views are responsible for processing requests from
users and sending responses. Templates provide a flexible system for generating HTML output.
URL configuration: Django allows you to easily configure URLs for each view, providing a
clear and logical URL structure in your web application.
Deployment and testing: Django provides tools for testing and deploying web applications,
ensuring their reliability and availability.
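The stages above can be sketched roughly as follows. This is an illustrative fragment only: the names Questionnaire, questionnaire_list, and index.html are hypothetical, and the code runs only inside a configured Django project, not standalone.

```python
# models.py -- data modelling with the Django ORM (hypothetical model)
from django.db import models

class Questionnaire(models.Model):
    name = models.CharField(max_length=200)
    questions_text = models.TextField()

# views.py -- a view handles the request and renders a template
from django.shortcuts import render

def questionnaire_list(request):
    questionnaires = Questionnaire.objects.all()
    return render(request, "index.html", {"questionnaires": questionnaires})

# urls.py -- the URL configuration maps paths to views
from django.urls import path

urlpatterns = [
    path("", questionnaire_list, name="index"),
]
```

The same three pieces (model, view, URLconf) appear in the system's own views.py and models in Section 3.3.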
Django community and ecosystem
Django is supported by an active and large community. There are many resources for
learning, including documentation, training courses, forums, and conferences. Developers are
constantly working to improve the framework, adding new features and improving existing ones.
Benefits of using Django
High Performance: Django allows developers to quickly build web applications, reducing the
time from idea to launch.
Scalability: Django is suitable for developing both small and large scalable web applications.
Security: Django provides a high level of security by default, helping to protect your web
application from common attacks.
Flexibility and extensibility: Django is highly flexible, allowing developers to easily
integrate third-party libraries and tools.
Django is a powerful, flexible and secure framework for developing web applications. Its
speed of development, DRY and MVC principles, large community, and rich ecosystem make it an
excellent choice for Python web developers. Django continues to be at the forefront of web
development, making important contributions to modern web technologies.
HTML (HyperText Markup Language) and CSS (Cascading Style Sheets) are fundamental
technologies in web development. HTML is used to structure the content on web pages, while CSS
is responsible for the style and visual design of these pages. Together, they form the basis of most
websites on the Internet, allowing you to create functional and aesthetically pleasing web pages.
HTML: basics and usage
HTML basics: HTML is a markup language that uses tags to define various elements on a
web page, such as headings, paragraphs, images, and links. Each HTML document contains a
structured set of elements that describe the content of the page.
Structure of an HTML document: A typical HTML document consists of a head section,
where metadata is defined, and a body section, where all the displayed content is placed. Metadata
can include the page title, links to CSS files, scripts, etc.
Semantic markup: Modern HTML supports semantic markup, which uses tags to describe the
content and structure of content. Tags such as <article>, <section>, <header>, <footer>, and others
help to create logically organised and accessible web pages.
Forms and data entry: HTML provides form elements that allow users to enter data. This can
be text, selections from a list, checkboxes, etc. The use of forms is key for interactive websites.
CSS: design and styling
The use of CSS: CSS is used to define the style of HTML elements, including colour, font,
padding, element layout, and more. CSS allows you to separate content from design, making it
easier to manage your web design.
Selectors and properties: In CSS, selectors are used to define which HTML elements will be
styled. Properties and their values determine which styles are applied.
Layout and positioning: CSS provides powerful tools for creating web page layouts,
including flexbox and grid, which allow developers to effectively manage the layout of elements.
Responsive design: CSS can adapt the layout of web pages to different screen sizes and
devices using media queries, which is key to developing responsive websites.
Interaction between HTML and CSS
HTML and CSS work together to create the structure and design of web pages. HTML sets
the structure, while CSS defines the style. This interaction allows developers to create more
attractive and user-friendly websites.
Current trends and directions of development
CSS preprocessors: tools such as SASS or LESS allow you to extend the capabilities of CSS
by using variables, functions, and other advanced techniques to create more efficient styles.
HTML5 and CSS3: The new versions of HTML and CSS include many innovations, such as
new semantic elements, animations, transitions, shadows, gradients, and others that make web
design more dynamic and visually appealing.
Web accessibility: Web accessibility is becoming increasingly important as web developers
strive to make their sites accessible to people with various disabilities. This includes the correct use
of HTML tags and CSS styles to support screen readers and other assistive technologies.
Visuals and interactivity: Modern web technologies allow for the creation of sophisticated
visuals and interactive elements that make websites more dynamic and engaging.
3.2 Development of a graphical interface
A graphical user interface (GUI) is a form of interface that allows people to interact with
electronic devices through visual elements such as icons and audio cues, as opposed to text-based
interfaces and command lines. GUIs were developed as a response to the difficulties associated with
learning to use the command line interface (CLI), which requires entering commands using a
keyboard.
Interaction in a GUI usually occurs through direct manipulation of graphical components.
GUIs are not limited to computers, but are also used in many portable devices such as MP3 players,
portable media players, game consoles, smartphones, and a variety of home, office, and industrial
control devices. The term GUI is typically not applied to lower-resolution types of display, such as
those in video games (where HUDs predominate), or to three-dimensional displays, as GUIs are
traditionally associated with two-dimensional screens.
GUI development is a key aspect of application programming in the field of human-
computer interaction. Its main goal is to improve efficiency and ease of use, following the
principles of user-centred design.
The visible graphical elements of an application are sometimes called its chrome or its GUI
(pronounced "gooey") [6-8]. Typically, users interact with information by manipulating visual
widgets appropriate to the kind of data they hold. The widgets of a well-designed interface are
selected to support the actions required to achieve the users' goals. The model-view-controller
pattern allows flexible structures in which the interface is independent of, and only indirectly
connected to, the application's functionality, so the GUI can be easily customised. This allows users
to choose or create a different shell at will and makes it easier for the designer to change the
interface as the user's needs evolve. Good GUI design is more about the user and less about the
system architecture. Large widgets, such as windows, typically provide a frame or container for the
main presentation content, such as a web page, email message, or picture.
A GUI can be designed to meet the requirements of a vertical market as an application-
specific GUI. Examples include automated teller machines (ATMs), point-of-sale (POS) touch
screens in restaurants [14], self-service checkouts used in retail stores, airline self-service ticketing
and check-in, information kiosks in public places such as a train station or museum, and monitors or
control screens in an embedded industrial application that uses a real-time operating system
(RTOS).
Cellular phones and handheld gaming systems also use a touchscreen GUI for specialised
applications. Newer cars use GUIs in their navigation systems and multimedia centres or
combinations of navigation and multimedia centres.
The GUI consists of five main pages, as shown in Figures 3.1-3.5.
Figure 3.1 - Authorisation page
Figure 3.2 - Registration page
Figure 3.3 - Testing page
Figure 3.4 - Testing page
Figure 3.5 - Test results page
3.3 Development of basic system algorithms
All algorithmic tools of the system are developed within the framework of creating the
views.py file, the contents of which are presented below.
from django.shortcuts import render, redirect
from django.contrib.auth import authenticate, login, logout
from django.contrib import messages
from .forms import CreateUserForm
from .models import questionnaireAnswers, questionnaireData
from django.contrib.auth.models import User


def loginPage(request):
    if request.method == 'POST':
        username = request.POST.get('username')
        password = request.POST.get('password')
        user = authenticate(request, username=username, password=password)
        if user is not None:
            login(request, user)
            return redirect('index')
        else:
            messages.info(request, "Username not found or entered incorrectly")
    context = {}
    return render(request, 'login.html', context)


def regist(request):
    form = CreateUserForm()  # an unbound form for GET requests
    if request.method == "POST":
        form = CreateUserForm(request.POST)
        if form.is_valid():
            form.save()
            return redirect('login')
    context = {'form': form}
    return render(request, 'register.html', context)


def logoutUser(request):
    logout(request)
    return redirect('login')


def index(request):
    if request.user.is_authenticated:
        questionnaire_list = questionnaireData.objects.all()
        return render(request, 'index.html',
                      {'questionnaire_list': questionnaire_list})
    else:
        return redirect('login')


def questionnaire_page(request, name):
    questionnaireCurr = questionnaireData.objects.get(questionnaire_name=name)
    questName = questionnaireCurr.questionnaire_name
    questions = questionnaireCurr.questions_text.split('/')
    tempAns = questionnaireCurr.questions_answers.split('/')
    # Pair each question with its answer options
    quest_ans = [[q, a] for q, a in zip(questions, tempAns)]
    answers = []
    if request.method == 'POST':
        for i in range(1, len(quest_ans) + 1):
            answers.append(request.POST.get('quest' + str(i)))
        addAnswersToDB(answers, questName, request)
        return render(request, 'result.html',
                      {'result': analyze_test_responses(questName, answers)})
    return render(request, 'questionnaire_page.html',
                  {'questName': questName, 'quest_ans': quest_ans})
def analyze_test_responses(test_type, responses):
"""
Analyses the answers to a psychological test, taking the type of
test into account. Answers should be a list of exactly 10 items,
each of which is "Yes", "No" or "Difficult to answer".
"""
# Check that the list contains exactly 10 answers
if len(responses) != 10:
return "Error: provide exactly 10 answers."
# Counting the answers
yes_count = responses.count("Yes")
no_count = responses.count("No")
unsure_count = responses.count("Difficult to answer")
# Analysis of answers depending on the type of test
if test_type == "Stress resistance test":
if yes_count >= 7:
return "You have shown a high level of stress tolerance,
which indicates your ability to deal effectively with stressful
situations. Not only do you remain calm and focused during difficulties,
but you are also able to quickly adapt to new circumstances and find
constructive ways to solve problems. This is an important quality that
helps you avoid burnout and maintain your mental health."
elif no_count >= 7:
return "Based on your answers, we can conclude that you
often find it difficult to cope with stress. This can have a negative
impact on your mental health and overall level of well-being. It is
important to pay attention to your emotional state and develop strategies
to manage stress effectively, such as meditation, exercise, and
counselling."
else:
return "Your answers indicate that you have an average
level of stress tolerance. While you are able to cope with stress in most
situations, certain circumstances can make you feel overwhelmed. It is
important to develop self-regulation strategies and learn to find ways to
relax and recover to increase your ability to cope with stress."
elif test_type == "Emotional intelligence test":
if yes_count >= 7:
return "You have demonstrated a high level of emotional
intelligence, which indicates your ability to effectively recognise,
understand and manage your own emotions and those of others. Your
empathy, self-awareness, and social skills contribute to positive
communication and relationships with others. These are important
qualities that contribute to personal growth and success in social
interactions."
elif no_count >= 7:
return "Based on your responses, we can conclude that
you may have difficulty understanding and managing emotions, both your
own and those of others. This can create difficulties in interacting with
others and managing personal reactions to different situations. It is
important to work on developing emotional awareness, empathy and
effective expression skills."
else:
return "Your answers indicate an average level of
emotional intelligence. You can understand and express your emotions, but
you may sometimes have difficulty managing your emotions or fully
understanding the emotions of others. Developing emotional self-
regulation skills and improving your empathy can increase your ability to
interact effectively in social situations."
    elif test_type == "Communication skills test":
        if yes_count >= 7:
            return ("Your answers indicate a high level of communication "
                    "skills. You listen effectively to others and express "
                    "your thoughts and ideas clearly. Your ability to "
                    "communicate clearly and your openness to dialogue "
                    "contribute to positive interactions with others. You "
                    "are able to build rapport, resolve conflicts "
                    "effectively and maintain healthy relationships, which "
                    "are key aspects of successful communication.")
        elif no_count >= 7:
            return ("Based on your answers, it may be that you find it "
                    "difficult to communicate with others. You may feel "
                    "awkward or unsure about talking, especially with "
                    "strangers or in group discussions. It is important to "
                    "work on developing your communication skills, such as "
                    "listening, expressing yourself clearly and confidently, "
                    "and interacting effectively in different social "
                    "contexts.")
        else:
            return ("According to your answers, you have an average level "
                    "of communication skills. You can carry on a "
                    "conversation and express yourself, but you may "
                    "sometimes feel unsure or have difficulty communicating "
                    "in certain situations. Developing active listening "
                    "skills, improving your ability to express your feelings "
                    "and opinions, and practising public speaking can "
                    "increase your confidence and effectiveness in "
                    "communication.")
    elif test_type == "Creativity test":
        if yes_count >= 7:
            return ("Your answers show that you have a high level of "
                    "creativity. You have a unique ability to think outside "
                    "the box, generate original ideas and solve problems in "
                    "creative ways. Your ability to innovate and openness to "
                    "new experiences allow you to approach tasks and "
                    "situations with new perspectives, which are valuable "
                    "qualities in many aspects of life and work.")
        elif no_count >= 7:
            return ("Based on your answers, we can conclude that you have "
                    "difficulty thinking outside the box or developing new "
                    "ideas. You may feel more comfortable sticking to "
                    "traditional methods and approaches. However, developing "
                    "creativity can be useful in many areas of life. You can "
                    "try experimenting with art, reading different books, or "
                    "attending creative events to broaden your horizons and "
                    "stimulate creative thinking.")
        else:
            return ("Your answers suggest that you have an average level of "
                    "creativity. You have the ability to be creative, but "
                    "you may need additional motivation or inspiration to "
                    "develop new ideas. It is important to nurture your "
                    "creativity, try new approaches and open yourself up to "
                    "new experiences to develop your creativity and bring "
                    "new ideas into your daily life and professional work.")
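Every branch above applies the same threshold rule: at least 7 "Yes" answers out of 10 selects the high-level interpretation, at least 7 "No" answers the low-level one, and any other distribution the average one. A condensed, stand-alone sketch of that rule (the function name and level labels here are illustrative, not identifiers from the system):

```python
def classify(responses):
    """Map ten Yes/No/Hard-to-say answers to an interpretation level."""
    if responses.count("Yes") >= 7:
        return "high"
    if responses.count("No") >= 7:
        return "low"
    return "average"

# Exercising the three branches of the threshold rule:
assert classify(["Yes"] * 8 + ["No"] * 2) == "high"
assert classify(["No"] * 7 + ["Hard to say"] * 3) == "low"
assert classify(["Yes"] * 5 + ["No"] * 5) == "average"
```

Note that "Hard to say" answers never push a result toward either extreme; they only make the average interpretation more likely.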
def addAnswersToDB(answers, name, req):
    quest = questionnaireData.objects.get(questionnaire_name=name)
    strAnsw = ""
    for answer in answers:  # a named loop variable avoids shadowing built-in `str`
        strAnsw += answer + "\n"
    answ = questionnaireAnswers(questionnaire_name_fk=quest,
                                user=req.user, answers=strAnsw)
    answ.save()
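The accumulation loop in addAnswersToDB amounts to joining the answers with newline separators, so the stored string can later be split back into the original list. A stand-alone sketch of that round trip:

```python
answers = ["Yes", "No", "Hard to say"]

# Same serialization as the loop in addAnswersToDB:
# one answer per line, each terminated by "\n".
str_answ = "".join(a + "\n" for a in answers)

# splitlines() recovers the original list when the record is read back.
assert str_answ.splitlines() == answers
```

This works because the answer options themselves never contain a newline; if they could, a different delimiter or a structured field would be needed.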
3.4 Testing the system
To evaluate the new software, the black box method was chosen. Black box testing assesses a software product without examining its internal structure or implementation. The method is versatile: it is used for testing devices and whole systems, as well as for acceptance and integration testing. It is often referred to as specification-based testing, because test cases are derived from the software's specification rather than from its code.
In black box testing, software testers typically do not require specialized knowledge of the
application's code, internal structure, or programming language. While they grasp the software's
general functionality, they remain unaware of its programming intricacies. For instance, a tester
recognizes that a specific input consistently yields a particular, predetermined output.
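For example, the login checks in Table 3.1 can be phrased as pure input/output contracts, with no reference to the implementation. The `validate_login` stub below is hypothetical — it merely stands in for the real Django view — but it captures the behaviour the specification demands:

```python
def validate_login(username, password, registered):
    """Illustrative stand-in for the real login view (black-box contract).

    Empty fields -> prompt to fill them in; unknown credentials ->
    "not registered" notification; otherwise the login succeeds.
    """
    if not username or not password:
        return "These fields must be filled in."
    if registered.get(username) != password:
        return "Such a user is not registered in the system."
    return "OK"

# A black-box tester only supplies inputs and checks outputs:
users = {"alice": "s3cret"}
assert validate_login("", "", users) == "These fields must be filled in."
assert validate_login("bob", "x", users) == (
    "Such a user is not registered in the system.")
assert validate_login("alice", "s3cret", users) == "OK"
```

The assertions mirror test cases 1 and 2 of the table: the tester never looks at how credentials are checked, only at whether the observed notification matches the expected one.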
Table 3.1 - Testing the application

1. Test case: entering a non-existent login-password pair when logging in.
   Expected result: the system displays a notification that such a user is not registered in the system.
   Actual result: as expected.
2. Test case: logging in without entering a login-password pair.
   Expected result: the system notifies the user that these fields must be filled in.
   Actual result: as expected.
3. Test case: entering different data in the password and repeat-password fields during registration.
   Expected result: the system notifies the user that the passwords do not match.
   Actual result: as expected.
4. Test case: attempting to create an account without entering the required data.
   Expected result: the system displays a notification that the appropriate data must be provided.
   Actual result: as expected.
5. Test case: attempting to add a test with missing data.
   Expected result: the system displays a notification prompting the administrator to fill in the missing fields.
   Actual result: as expected.
6. Test case: creating an empty survey.
   Expected result: the system displays a notification prompting the administrator to fill in the missing fields.
   Actual result: as expected.
7. Test case: attempting to access the administrator functionality as a regular user.
   Expected result: the system reacts as if the page does not exist.
   Actual result: as expected.
The test results show that the system is fully functional and ready to be used in real-world
conditions.
In the context of test and survey creation, testers encountered situations where they tried to
proceed without filling in all the necessary fields. In both cases, the system effectively flagged the
missing information, prompting administrators to complete the required fields before proceeding.
This functionality guarantees that tests and surveys are adequately configured, minimizing errors
and ensuring data accuracy.
Lastly, testers attempted to access administrator functionalities with regular user privileges.
In response, the system behaved as expected, denying access to unauthorized users and maintaining
the integrity of administrative controls.
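One common way to achieve this behaviour (a sketch, not necessarily the project's exact code) is to answer HTTP 404 rather than 403 for requests from non-administrators, so that the very existence of the admin page is not revealed:

```python
def admin_page_status(is_staff):
    """Illustrative status-code rule for the admin page.

    Returning 404 ("Not Found") instead of 403 ("Forbidden") for a
    regular user makes the page behave as if it does not exist, which
    is exactly the reaction described in Table 3.1, case 7.
    """
    return 200 if is_staff else 404

assert admin_page_status(True) == 200   # administrator sees the page
assert admin_page_status(False) == 404  # regular user: page "does not exist"
```

In a Django view this would typically be expressed by raising Http404 when the permission check fails; the one-line function above only isolates the decision rule itself.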
In summary, the user testing process revealed that the system effectively handles various
scenarios, providing timely and informative feedback to users. These findings underscore the
system's reliability, security, and user-friendliness, positioning it as a valuable tool for researchers in
the field of experimental psychological research.
CONCLUSIONS
In the course of the thesis "Development of a website for planning and conducting
experimental psychological research," several significant results and conclusions of both theoretical
and practical importance were achieved, underscoring the pivotal role of integrating modern digital
technologies into psychological research.
Firstly, the study elucidates the critical importance of incorporating contemporary web
technologies into psychological research endeavors. By harnessing web-based platforms for
psychological testing, researchers gain access to a myriad of advantages, including enhanced data
collection, storage, and analysis capabilities. This facilitates expedited research processes, bolstering
efficiency and enabling researchers to engage with larger participant samples.
Secondly, the thesis highlights the significance of well-considered user interface design and user interaction. Simplicity and intuitiveness of the platform interface are essential, particularly for users lacking technical expertise. By prioritizing user experience through streamlined design and navigational clarity, researchers can ensure the effective utilization of the platform by a diverse user base.
Furthermore, the development of a robust and scalable system architecture emerges as a
pivotal outcome of the study. Leveraging modern methodologies and technologies, such as Django,
empowers the creation of high-performance, secure, and adaptable solutions. This affords the
platform requisite stability and flexibility, facilitating the implementation of a diverse array of
psychological tests and research protocols.
The scientific novelty of the thesis lies in its integrated approach to system development,
which carefully considers the unique requirements of psychological testing alongside modern web
development standards. The resulting system not only simplifies research processes but also
introduces novel avenues for collecting and analyzing psychological data.
In terms of practical value, the thesis culminates in the creation of a functional tool with
tangible applications in psychological research practice. This versatile system serves as a valuable
resource for both academic researchers and practicing psychologists, furnishing them with a potent
instrument for test administration, result analysis, and research planning.
In summary, the development of a website for planning and conducting experimental
psychological research represents a seminal contribution to the advancement of contemporary
psychological science and practice. By furnishing researchers with effective and modern tools, this
initiative catalyzes innovation and fosters progress within the field.
Moreover, the comprehensive approach taken in this thesis underscores the dynamic
intersection between psychological research methodologies and technological advancements. By
bridging these domains, the study not only addresses current research needs but also anticipates
future trends and challenges within the discipline.
Beyond its immediate implications for research, the developed website holds promise for
broader societal impact. As psychological insights increasingly inform public policy, education, and
healthcare, the availability of robust research tools becomes instrumental in driving positive social
change. By empowering researchers with accessible and efficient platforms, this work contributes to
the broader mission of leveraging psychological science for the betterment of individuals and
communities.
Looking ahead, ongoing refinement and enhancement of the developed website will be
essential to ensure its continued relevance and effectiveness in the rapidly evolving landscape of
psychological research. This necessitates ongoing collaboration between psychologists, web
developers, and other stakeholders to iteratively improve the platform's features, usability, and
accessibility.