Project
1. INTRODUCTION
2. INTRODUCTION TO ENCRYPTION
o A Brief History of Encryption
3. TECHNOLOGY STACK OVERVIEW
o Programming Language: C#
o Backend Framework: Flask (Python)
o Cloud Database: Firebase Firestore
o Frontend Framework: WPF (Windows Presentation Foundation)
4. APPLICATION MODULES
o User Interface
Main Window
Friends List Panel
Chat Window
Message Input Area
File Transfer Section
o Functional Components
Login and Registration
Real-time Messaging
File Upload and Download
Friend Request Handling
Chat Polling and Refresh
5. ENCRYPTION AND SECURITY
o AES-256 Encryption Overview
o Encryption in C# using AES
o Secure Message Transmission
o Message Decryption Logic
6. BACKEND INTEGRATION
o Flask API Overview
o Firebase Firestore Structure
Message Storage Format
File Metadata Schema
o Hosting on Render
7. SYSTEM DESIGN AND ARCHITECTURE
o Project Architecture Diagram
o Data Flow Diagram
o ER Diagram and Use Cases
8. PROJECT IMPLEMENTATION DETAILS
o Setting up the Environment
o Chat Polling with DispatcherTimer
o Message Handling with HashSet
o Decryption and Message Mapping
9. USER EXPERIENCE AND UI DESIGN
o Styling with XAML
o Color Schemes and Responsiveness
o Notification and Reload Button
10. TESTING STRATEGIES
o Unit Testing for Encryption
o Integration Testing with Firebase
o UI Testing with WPF
11. MAINTENANCE AND DEPLOYMENT
o requirements.txt and Procfile
o Render Deployment Flow
o Update and Bug Fix Strategy
12. FUTURE ENHANCEMENTS
o Integration of SignalR for Real-Time WebSockets
o Video and Voice Chat
o File Compression before Transfer
13. REFERENCES
14. APPENDICES
o Sample Code Snippets
o Screenshots of Application
o Firebase JSON Structure
o User Manual
INTRODUCTION
In the modern digital age, where communication is at the heart of personal relationships,
business collaborations, and information exchange, ensuring the privacy and integrity of shared
information has become a crucial challenge. Traditional chat applications often lack end-to-end
encryption, leaving data vulnerable to unauthorized access and breaches. The emergence of
cyberattacks and increased digital surveillance necessitates the development of a more secure
and user-controlled communication system.
WHISPR is a desktop-based encrypted messaging and file-sharing platform designed to meet this
challenge. It ensures user confidentiality and message integrity by implementing strong AES-256
encryption and secure backend communication through Firebase and Flask. The objective of the
application is to provide a real-time, safe, and user-friendly chat interface for users who demand
high security in digital communication.
The application is built using a combination of modern technologies such as C# for the core
logic, WPF (Windows Presentation Foundation) for the UI design, Flask (Python) for backend
API services, and Firebase Firestore for real-time cloud-based data handling. The choice of these
technologies ensures reliability, efficiency, and a smooth development and deployment process.
Render is used for backend hosting, allowing the server APIs to be publicly available with
minimal configuration overhead.
One of the key features of WHISPR is its user-oriented interface. It includes features such as
friend request handling, encrypted file sharing, real-time chat updates via polling, and message
encryption/decryption at the application level. These features are integrated in a way that
promotes seamless usability while maintaining strict security standards.
Moreover, the system includes real-time polling using DispatcherTimer to fetch messages every
few seconds, maintaining updated conversations without overwhelming the server. Additionally,
by storing encrypted messages and decrypting them only on the receiver's end, WHISPR
maintains end-to-end confidentiality.
This project report details every aspect of WHISPR’s design and functionality—from the initial
motivation and technology stack to the implementation and testing processes. It aims to
demonstrate how robust system architecture and secure design principles can come together to
build an efficient and highly secure messaging platform for future applications, whether for
enterprise use or personal communication.
INTRODUCTION TO ENCRYPTION
Encryption is the process of protecting information by transforming readable data, referred
to as plaintext, into an unreadable format known as ciphertext, using an algorithm and an
encryption key. If encrypted data falls into an unauthorized party’s hands, it cannot be read
without the correct key to decrypt it. Only parties that hold the appropriate encryption
key(s) can translate the data back into a readable form.
The concept of encryption could be compared to protecting your home from being accessed by
strangers, with a lock and key. In this analogy, the “lock” equates to encryption, keeping
unauthorized individuals from accessing your house (or your data). Your house “key” is the
equivalent to an encryption key, which is used to lock and unlock the door to the house.
Only individuals that possess the key can lock / unlock the front door to your house (or encrypt /
decrypt your data). If the key falls into the wrong hands, someone that you don’t want entering
your home could break in and steal your personal belongings. If a cybercriminal steals or
manages to forge an encryption key, the ‘door’ is wide open for data compromise. And just as a
lock can be picked given enough time, encryption does not guarantee confidentiality outright;
it makes unauthorized access difficult enough, in practice, to deter attackers from attempting
it at all.
There are two different types of cryptographic algorithms – symmetric and asymmetric. A
symmetric (private-key) algorithm uses a single key that is distributed only amongst
trusted members of an organization who are allowed access to sensitive data. These members use
the same key to both encrypt and decrypt information.
An example of symmetric encryption is a password-protected PDF. The creator of the PDF
secures the document using a passcode. Only authorized recipients that possess that passcode are
able to view and read that PDF. In this instance, encrypting and decrypting are limited to
individuals that possess the single key.
An asymmetric algorithm consists of two different keys – a private key and a public key. The
private key is kept secret and only accessible by authorized users, while the public key can be
shared freely. While the public key can be shared with anyone to encrypt plaintext, the private
key is required to then decrypt that ciphertext. An example of asymmetric encryption would be
sending out an email via a platform called Pretty Good Privacy (PGP).
Anyone within an organization could use PGP to send out an encrypted email, using the
recipient’s public key. However, only the recipient could decrypt and read the email, using their
private decryption key.
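To make the asymmetric model concrete, the following minimal C# sketch (illustrative only, using the built-in System.Security.Cryptography APIs of .NET 6 rather than any code from this project) encrypts a short message with a recipient's public key and decrypts it with the matching private key:

using System;
using System.Security.Cryptography;
using System.Text;

class AsymmetricDemo
{
    static void Main()
    {
        // Generate a key pair; in practice the private key never leaves the recipient.
        using RSA recipient = RSA.Create(2048);

        // Anyone holding the public key can encrypt...
        byte[] plaintext = Encoding.UTF8.GetBytes("hello");
        byte[] ciphertext = recipient.Encrypt(plaintext, RSAEncryptionPadding.OaepSHA256);

        // ...but only the private-key holder can recover the plaintext.
        byte[] recovered = recipient.Decrypt(ciphertext, RSAEncryptionPadding.OaepSHA256);
        Console.WriteLine(Encoding.UTF8.GetString(recovered)); // prints "hello"
    }
}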
A Brief History of Encryption
Encryption, the practice of encoding messages to ensure they can only be
read by the intended recipient, has a rich history spanning thousands of years. The earliest known
use of encryption dates back to around 1900 BC, when a scribe in Egypt used unexpected
hieroglyphic characters to encode messages. This was a simple form of substitution cipher, where
characters were replaced with others to obscure the original message.
One of the most famous early encryption methods is the Caesar cipher, developed by Julius
Caesar around 60 BC. This cipher involves shifting each letter in the alphabet by a fixed number
of positions. For example, with a shift of three, 'A' becomes 'D', 'B' becomes 'E', and so on.
Despite its simplicity, the Caesar cipher remained effective for centuries because it relied on the
secrecy of the shift value rather than the system itself.
The development of encryption continued to evolve, with significant advancements in the 20th
century. In 1976, Whitfield Diffie and Martin Hellman introduced the concept of public-key
cryptography, which allows for secure communication without the need to share a secret key in
advance. This breakthrough led to the development of algorithms like RSA, which relies on the
mathematical difficulty of factoring the product of two large prime numbers to ensure security.
The 1970s also saw the creation of the Data Encryption Standard (DES), developed by IBM and
adopted as a US federal standard in 1977. DES used a symmetric key algorithm, meaning the same
key was used for both encryption and decryption. However, by the late 1990s, DES had been shown
to be vulnerable to brute-force attacks due to its relatively short 56-bit key length.
In response to the limitations of DES, the Advanced Encryption Standard (AES) was introduced
in 2001. AES, also a symmetric key algorithm, offers a higher level of security and is widely
used today for securing sensitive data. It has been adopted by governments and organizations
around the world as a standard for encryption.
Quantum computing poses a new challenge to traditional encryption methods, as quantum
computers can potentially break many of the encryption algorithms currently in use. To address
this, researchers are developing post-quantum cryptographic (PQC) algorithms that are resistant
to attacks from both classical and quantum computers. The National Institute of Standards and
Technology (NIST) is leading efforts to standardize these new algorithms.
Encryption has become an essential tool for securing digital communications in the modern era.
It is used not only by governments and military organizations but also by individuals and
businesses to protect personal information, financial transactions, and confidential
communications. As technology continues to advance, the field of cryptography will likely see
further developments to meet the evolving security challenges.
Today, encryption is a critical component of cybersecurity, protecting data in transit and at rest,
and ensuring the privacy and security of digital communications. As encryption technologies
continue to evolve, they will play a crucial role in safeguarding information in an increasingly
digital world.
3. TECHNOLOGY STACK OVERVIEW
A technology stack, often referred to as a tech stack, is a set of technologies, software,
and tools used in the development and deployment of sites, apps, and other digital
products.
 It consists of two main parts: the frontend (client-side) and the backend (server-side).
 The frontend includes technologies like JavaScript, HTML, CSS, and frameworks such as
Vue, React, and Angular.
 The backend involves programming languages like Node.js, Java, Python, and PHP, along
with web application frameworks like Spring and Django.
 Additionally, databases, event and messaging systems, infrastructure, virtualization, and
mobile application technologies are part of the tech stack.
 The choice of a tech stack can significantly impact the design, functionality, and
scalability of a web or mobile application.
 It is crucial for startups and small businesses to select an appropriate tech stack
carefully to increase their chances of developing a successful software product.
Programming Language: C#
C# is a general-purpose, high-level programming language developed by Microsoft that
supports multiple programming paradigms including object-oriented, functional, and
component-oriented programming.
It was first announced in July 2000 and was later approved as an international
standard by Ecma (ECMA-334) in 2002 and by ISO/IEC (ISO/IEC 23270) in 2003.
C# is part of the C family of languages, which also includes C and C++, but it is
considered the most modern and easiest to learn due to its high-level nature.
C# also offers a wide range of libraries and frameworks that make it easy to work across
different operating systems and platforms.
This flexibility and comprehensive support make it a versatile choice for developers
looking to build scalable and robust applications.
For those interested in learning C#, resources like the C# documentation on Microsoft's
website and tutorials on GeeksforGeeks provide a comprehensive guide from beginner to
advanced levels.
These resources cover everything from setting up the environment and writing a "Hello
World" program to advanced topics like object-oriented programming, multithreading,
and LINQ.
Backend Framework: Flask (Python)
At its core, Flask is built on two key components: Werkzeug, a WSGI (Web Server
Gateway Interface) utility library that handles low-level web server interactions, and
Jinja2, a templating engine that simplifies dynamic HTML generation. These
foundational libraries give Flask the ability to manage HTTP requests, route URLs to
Python functions, and render templates efficiently. Because Flask does not include built-
in features like database abstraction layers or authentication systems, developers can
choose the best tools for their projects, such as SQLAlchemy for databases or Flask-
Login for user sessions.
One of Flask’s defining features is its simplicity in setting up and running a basic web
application. A minimal Flask app can be written in just a few lines of code, making it an
excellent choice for beginners and rapid prototyping. For example, a simple "Hello,
World!" application requires only importing Flask, creating an app instance, and defining
a route with a function that returns a response. The built-in development server allows
developers to test their applications immediately without complex configurations.
Flask excels in building RESTful APIs, thanks to its lightweight nature and compatibility
with extensions like Flask-RESTful. Developers can easily define API endpoints, parse
JSON requests, and return structured responses, making Flask a popular choice for
backend services in modern web and mobile applications. Additionally, Flask integrates
seamlessly with frontend frameworks like React or Vue.js, enabling full-stack
development with a clear separation between backend and frontend logic.
Another advantage of Flask is its extensive ecosystem of extensions. While Flask itself
remains minimal, developers can enhance functionality by adding libraries such as Flask-
SQLAlchemy for database integration, Flask-WTF for form handling, and Flask-CORS
for cross-origin resource sharing. This modularity ensures that applications remain
lightweight while still being able to scale with additional features as needed.
Despite its simplicity, Flask is capable of handling complex applications when properly
structured. Larger projects often use Flask’s blueprint feature to organize code into
reusable modules, improving maintainability. Furthermore, Flask can be deployed in
production using WSGI servers like Gunicorn or uWSGI, often behind a reverse proxy
like Nginx for better performance and security.
Cloud Database: Firebase Firestore
Cloud Firestore is a flexible, scalable NoSQL document database offered as part of Google's
Firebase platform. One of Firestore's most powerful features is its real-time synchronization system. When
clients connect to Firestore, they can listen to queries and receive instantaneous updates
whenever matching documents change. This push-based architecture eliminates the need
for polling and enables highly responsive user experiences. The synchronization works
across all connected devices, automatically handling network interruptions and conflicts
through version vectors and offline persistence.
For mobile applications, Firestore provides robust offline support. The SDKs maintain a
local cache of recently accessed data and can queue write operations when offline. Once
connectivity is restored, changes are automatically synchronized with the cloud database
while resolving any conflicts according to predefined rules. This capability allows
applications to remain fully functional regardless of network conditions.
Firestore scales automatically to handle whatever load your application generates. Behind
the scenes, it uses Google's global infrastructure to distribute data across multiple
regions, ensuring low latency access worldwide. The database employs automatic
sharding to distribute query load and can handle spikes in traffic without manual
intervention. This makes it particularly suitable for applications with unpredictable
growth patterns.
Security in Firestore is implemented through declarative security rules. These rules use a
JavaScript-like syntax to define precisely which users can read or write specific
documents. Rules can leverage Firebase Authentication to make decisions based on user
identity and can even validate data structures before allowing writes. This security model
operates at the database level, providing protection regardless of how clients connect to
Firestore.
Performance optimization is built into Firestore's query model. All queries scale with the
size of your result set rather than your total data volume, and the database automatically
maintains indexes for all fields to ensure consistent performance. Composite indexes can
be defined for more complex query patterns, and the SDKs include local caching to
minimize network requests.
Integration with other Firebase services is seamless. Firestore works particularly well
with Firebase Authentication for user management, Cloud Functions for server-side
processing, and Firebase Storage for file attachments. The database also provides official
SDKs for iOS, Android, web, and server-side environments like Node.js, Python, and Go.
For developers building applications with Python backends (like Flask), Firestore can be
accessed through the Firebase Admin SDK. This allows server-side code to perform
privileged operations while still leveraging all of Firestore's core features. The Admin
SDK bypasses client-side security rules, making it suitable for backend processing and
administrative functions.
Firestore's pricing model is based on operations rather than traditional capacity planning.
Costs are calculated from the number of reads, writes, and deletes performed, along with
network bandwidth and stored data volume. This pay-as-you-go approach can be cost-
effective for many applications, though it requires developers to understand their access
patterns to optimize expenses.
The database supports atomic transactions across multiple documents, ensuring data
consistency for critical operations. These transactions use optimistic concurrency control
to handle conflicts automatically. Firestore also provides powerful query capabilities
including filtering, sorting, and pagination, though with some intentional limitations
designed to prevent performance issues.
Frontend Framework: WPF (Windows Presentation Foundation)
WPF (Windows Presentation Foundation) is Microsoft's UI framework for building Windows
desktop applications on .NET. It separates presentation from logic through XAML, a
declarative markup language for defining layouts, styles, and animations, and it provides
rich data binding, templating, and hardware-accelerated rendering. These capabilities make
it well suited to WHISPR's responsive, visually styled chat interface, which is described
module by module in the next section.
4. APPLICATION MODULES
The User Interface (UI) of the application is designed to provide an intuitive, responsive, and
visually engaging experience for users. Built using Windows Presentation Foundation (WPF),
the UI is structured into several key components, each serving a distinct purpose in facilitating
seamless communication and interaction. The design follows modern UX principles,
incorporating smooth animations, real-time updates, and adaptive layouts to ensure usability
across different screen sizes and resolutions.
1. Main Window
The WHISPR main window presents a sophisticated yet minimalist interface designed for optimal
social interaction management. Following contemporary UX principles, the layout employs
a three-zone partitioning system:
Left Navigation Rail (200px width)
Constructed as a vertical StackPanel with Segoe Fluent Icons
Features semi-transparent acrylic material with 80% opacity
Contains two primary navigation items: "Requests" and "Chat"
Each nav item includes:
o Icon button (48x48px hit target)
o Text label (13pt semi-bold Segoe UI)
o Notification badge (red circular counter for unread items)
Implements highlight effects using WPF's VisualStateManager for hover/pressed states
Central Content Area (Fluid width)
Utilizes a transitioning ContentControl with swipe animations
Hosts two primary views:
o Requests View: Friend request management interface
o Chat View: Conversation list and messaging interface
Implements UI virtualization through VirtualizingStackPanel
Features continuous scroll with dynamic loading threshold
Status Bar (30px height)
Displays three key information segments:
o Connection status (Wi-Fi/LAN signal strength indicator)
o Notification center (Message counter icon)
o System clock (HH:MM 24-hour format)
2. Visual Design System
Typography Hierarchy
Window title: 14pt Segoe UI SemiBold (Left-aligned)
Navigation labels: 11pt Segoe UI Regular (All-caps)
Content headers: 16pt Segoe UI SemiBold
Body text: 13pt Segoe UI Regular
Color Palette
Primary: #6750A4 (Deep purple - Microsoft Fluent palette)
Surface: #F3F3F3 (Light mode background)
On-surface: #1A1A1A (Text primary)
Secondary: #727272 (Text secondary)
Animation System
Navigation transitions: 300ms fade+slide animation (using WPF's Storyboard)
Content loading: 150ms opacity fade-in
Hover effects: 100ms color/scale transformations
Notification pulses: 800ms heartbeat animation
3. Interaction Patterns
Navigation Behavior
Clicking "Requests" loads the friend request management view
Clicking "Chat" displays the conversation list
Both buttons maintain persistent selection states (accent underline indicator)
Supports keyboard shortcuts (Alt+R for Requests, Alt+C for Chat)
Responsive Adaptations
Compact mode (<800px width):
o Navigation rail collapses to icon-only (48px width)
o Labels appear in tooltip on hover
Mobile emulation (<500px width):
o Activates bottom navigation bar
o Content area gains vertical scrolling
4. Technical Implementation Details
Performance Optimizations
UI virtualization for all scrollable lists
Bitmap caching for static navigation elements
Asynchronous loading of profile pictures
Debounced search (300ms delay) in request/chat lists
Accessibility Features
Keyboard navigation (Tab sequence management)
High contrast mode support
Screen reader annotations via AutomationProperties
Dynamic scaling (96-144 DPI support)
5. State Management
Visual States
Active view (Requests/Chat selection indicator)
Unread notifications (Pulsing badge animation)
Connection status (Animated transition between states)
Typing indicators (Ellipsis animation in chat list)
Data Binding
Navigation items bound to ObservableCollection<NavItem>
Unread counts via INotifyPropertyChanged
 View switching through DataTemplate selection (illustrated in the sketch below)
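As a rough illustration of this binding arrangement (a sketch only; NavItem and MainViewModel are hypothetical names, and the real view-model may differ):

using System.Collections.ObjectModel;
using System.ComponentModel;

public class NavItem : INotifyPropertyChanged
{
    private int _unreadCount;

    public string Label { get; init; } = "";

    // Unread badge count; raising PropertyChanged lets WPF refresh the badge.
    public int UnreadCount
    {
        get => _unreadCount;
        set { _unreadCount = value; OnPropertyChanged(nameof(UnreadCount)); }
    }

    public event PropertyChangedEventHandler? PropertyChanged;
    private void OnPropertyChanged(string name) =>
        PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(name));
}

// Bound to the navigation rail, e.g. ItemsSource="{Binding NavItems}".
public class MainViewModel
{
    public ObservableCollection<NavItem> NavItems { get; } = new()
    {
        new NavItem { Label = "Requests" },
        new NavItem { Label = "Chat" },
    };
}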
6. Future Enhancement Pathways
Planned Features
Pinned chats section in navigation
Status customization (Online/Offline/Busy)
Theme engine (Light/dark/system preference)
Multi-window support for chat conversations
7. Chat Window
The Chat Window in WHISPR represents the core messaging interface where users engage in
real-time conversations. Designed with both form and function in mind, this component
combines clean aesthetics with robust technical implementation to create a seamless
communication experience. The window follows a structured layout that prioritizes message
clarity while supporting advanced communication features.
At the top of the interface sits the conversation header, a persistent element that displays the
recipient's profile information and current availability status. This area includes interactive
controls for accessing additional communication options such as voice and video calling. A thin
visual separator distinguishes the header from the primary message display area below.
The message display region forms the central focus of the window, presenting conversation
history in a reverse-chronological format with automatic scrolling to keep the most recent
messages visible. This area implements sophisticated rendering techniques to maintain
performance during rapid message exchanges. Each message appears within a visually distinct
container that adapts its appearance based on message origin, creating clear differentiation
between sent and received communications.
Message containers incorporate multiple information layers including the message content itself,
precise timestamp data, and read status indicators. The design employs subtle animations when
new messages arrive, using smooth entrance transitions that maintain conversation flow without
causing visual disruption. The system automatically handles message grouping, clustering
consecutive messages from the same sender to reduce interface clutter.
Below the message history, the composition area provides tools for creating and sending new
messages. This section features an adaptive input field that expands to accommodate multi-line
content while maintaining all necessary formatting controls. The interface includes discreet but
accessible options for attaching files, inserting emoji, and formatting text. A dedicated send
button provides clear affordance while supporting keyboard shortcuts for power users.
The window implements a comprehensive typing indicator system that shows real-time feedback
when other participants are composing messages. These indicators appear within the message
flow without disrupting existing content layout. The interface also handles system messages and
notifications, displaying them with appropriate visual treatment to distinguish them from regular
conversation content.
Visual customization forms a key aspect of the chat experience, with support for multiple color
themes including light and dark modes. The interface respects system-wide accessibility settings,
automatically adjusting text sizes and contrast ratios when needed. All interactive elements
provide clear visual feedback during user engagement, with hover states and active effects that
confirm user actions.
Performance optimization ensures smooth operation during extended conversations. The
implementation uses intelligent message caching and virtualization to maintain responsiveness
regardless of conversation length. Background processes handle message synchronization and
delivery confirmation without impacting the foreground user experience.
The design incorporates subtle but effective motion principles throughout the interface. Message
transitions use physics-based animations that feel natural and responsive. Interactive elements
employ micro-interactions that provide tactile feedback during use. These animations are
carefully tuned to enhance usability without becoming distracting.
Accessibility features are deeply integrated into the chat experience. The interface supports full
keyboard navigation and screen reader compatibility. Message timestamps and status indicators
include ARIA labels for assistive technology. Color choices meet WCAG contrast requirements,
and all functional elements maintain adequate touch target sizes for mobile use.
Error states and edge cases receive special attention in the design. Network interruptions trigger
informative but unobtrusive status messages. Failed message delivery includes clear recovery
options. The interface maintains full functionality during connectivity issues with appropriate
visual cues about sync status.
The Chat Window represents a careful balance between visual simplicity and functional depth.
Every design decision focuses on reducing cognitive load while maintaining access to powerful
communication features. The result is an interface that feels immediately familiar yet capable of
handling complex messaging scenarios with ease.
The input system supports rich media integration through a discreet but accessible toolbar that
appears contextually when needed. Emoji selection employs a categorized picker interface with
frequently used items and recent selections readily available. Sticker integration goes beyond
simple insertion, offering suggested stickers based on message content and conversation history.
These media elements maintain consistent sizing and alignment within the message flow.
For technical communication, the area includes specialized support for code snippets and
technical formatting. Syntax highlighting activates automatically when detecting code blocks,
with language recognition that adjusts coloring appropriately. The system maintains proper
indentation and structure during editing, preventing formatting issues during composition. A full-
screen mode provides additional space for working with complex code segments.
The interface implements smart mention functionality that suggests conversation participants as
the user types @ symbols. These mentions appear as interactive elements in the composed
message, complete with profile hover cards for verification. The system prevents duplicate
mentions and automatically handles notification triggers when messages are sent.
Message drafting features include robust undo/redo support that maintains history across
sessions. Cloud-synced drafts automatically preserve unfinished compositions, even when
switching between devices. The interface provides clear visual indicators when drafts exist, with
options to review or discard them before sending.
The send mechanism offers multiple engagement options while maintaining simplicity. A
prominent send button provides clear primary action, while keyboard shortcuts allow rapid
sending without mouse interaction. The system intelligently handles enter key behavior,
supporting both single-line sends and multi-line composition based on user preference settings.
Accessibility features permeate the design, with proper labeling for screen readers and keyboard
navigation that follows logical patterns. The interface adapts to system text size preferences and
high contrast modes without losing functionality. All interactive elements provide adequate
touch targets and clear visual feedback during interaction.
Error prevention and recovery mechanisms are carefully implemented. The system validates
message content before sending, warning about potential issues like empty messages or large file
attachments. Network interruptions trigger automatic queuing of outgoing messages with clear
status indicators. Failed sends include one-tap retry options and alternative delivery methods.
The visual design maintains harmony with the overall application aesthetic while providing
necessary functional distinctions. The input area uses subtle elevation to establish hierarchy, with
careful shadow application that doesn't overwhelm the interface. Interactive states employ
tasteful animations that confirm user actions without unnecessary distraction.
Performance optimization ensures responsive typing even during system load. The
implementation uses efficient text rendering and event handling to maintain smooth composition
regardless of message complexity. Background processing handles formatting analysis and
suggestion generation without impacting foreground responsiveness.
The Message Input Area represents a careful balance between simplicity and power, providing
advanced features when needed while maintaining an unobtrusive presence during basic messaging.
2. Functional Components
The login and authentication system serves as the secure entry point to WHISPR's
communication platform, implementing a robust multi-stage verification process
designed to balance security with user convenience. This sophisticated infrastructure
handles all aspects of user identity management from initial registration through ongoing
session maintenance.
The registration workflow begins with a streamlined interface collecting only essential
credentials - name and email address - while deferring additional profile details to post-
authentication completion. Behind this simple facade lies a complex validation engine
that performs real-time checks on email format validity using regular expression pattern
matching against RFC standards. The system simultaneously verifies email domain
existence through DNS record validation while checking for disposable or blacklisted
email providers.
Username selection incorporates similar rigor, enforcing uniqueness through instant
database queries while applying linguistic analysis to prevent impersonation attempts.
The interface provides intelligent suggestions when desired names are unavailable,
generating variants based on common modifications while maintaining the user's original
naming intent. All suggestions are checked against reserved terms and inappropriate
language filters.
The friend request handling system manages the complete lifecycle of social connections
within WHISPR. When a user initiates a friend request, the system validates the
relationship possibility (preventing duplicate requests and checking against blocked
users) before creating a pending connection record. Recipients receive real-time
notifications through multiple channels including in-app alerts and optional email
notifications. The interface presents incoming requests with contextual information
including mutual connections and profile compatibility indicators. Acceptance triggers a
bidirectional relationship establishment, while rejection implements a cooling-off period
before allowing subsequent requests. The system maintains detailed privacy controls
allowing users to specify exactly what information new friends can access. For spam
prevention, automatic rate limiting restricts excessive friend request activity, with
machine learning algorithms detecting and flagging suspicious connection patterns. All
friend interactions are logged for moderation purposes, with clear audit trails visible to
both participants. The architecture supports bulk operations for managing large friend
networks while maintaining responsive performance as social graphs expand.
5. ENCRYPTION AND SECURITY
AES-256 Encryption Overview
The Advanced Encryption Standard (AES) with 256-bit keys forms the cryptographic
foundation of WHISPR's security model. This symmetric block cipher algorithm, approved
by the U.S. National Security Agency for top-secret information, operates on 128-bit
blocks through multiple transformation rounds. The 256-bit variant provides a key space
of 2^256 possible combinations, making brute-force attacks computationally infeasible
even with quantum computing advancements. AES-256's substitution-permutation
network structure applies four core operations: SubBytes (non-linear substitution),
ShiftRows (transposition), MixColumns (linear mixing), and AddRoundKey (XOR with the
round key). These operations repeat through 14 rounds for 256-bit keys, with each round
deriving unique subkeys through key expansion. This structure ensures complete diffusion
and confusion properties, where a single bit change in plaintext affects the entire
ciphertext unpredictably. WHISPR implements AES in Galois/Counter Mode (GCM), which
provides both confidentiality and authenticity through built-in message authentication
codes (MACs), preventing ciphertext tampering while encrypting.
The implementation reinforces this with several supporting measures, sketched in the example below:
 Key derivation through PBKDF2 with 100,000 iterations for password-based encryption
 Proper memory management of sensitive data using SecureString and pinned byte arrays
 Constant-time comparison operations to prevent timing attacks
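A condensed C# sketch of how password-based key derivation and AES-GCM encryption of a chat message can fit together is shown below. It is illustrative rather than the application's actual code; the iteration count and key size follow the figures above, and the helper names are invented:

using System;
using System.Security.Cryptography;
using System.Text;

static class CryptoSketch
{
    // Derive a 256-bit key from a password using PBKDF2 with 100,000 iterations.
    public static byte[] DeriveKey(string password, byte[] salt) =>
        Rfc2898DeriveBytes.Pbkdf2(password, salt, 100_000, HashAlgorithmName.SHA256, 32);

    // AES-256-GCM: returns (nonce, ciphertext, tag); all three are stored with the message.
    public static (byte[] Nonce, byte[] Cipher, byte[] Tag) Encrypt(byte[] key, string message)
    {
        byte[] nonce = RandomNumberGenerator.GetBytes(12); // 96-bit nonce, never reused per key
        byte[] plain = Encoding.UTF8.GetBytes(message);
        byte[] cipher = new byte[plain.Length];
        byte[] tag = new byte[16];                          // 128-bit authentication tag
        using var aes = new AesGcm(key);
        aes.Encrypt(nonce, plain, cipher, tag);
        return (nonce, cipher, tag);
    }

    public static string Decrypt(byte[] key, byte[] nonce, byte[] cipher, byte[] tag)
    {
        byte[] plain = new byte[cipher.Length];
        using var aes = new AesGcm(key);
        aes.Decrypt(nonce, cipher, tag, plain); // throws if the tag check fails (tampering)
        return Encoding.UTF8.GetString(plain);
    }
}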
6. BACKEND INTEGRATION
Flask API Overview
Flask is a lightweight and flexible web framework for Python, commonly used for
building APIs. It follows a minimalist design philosophy, allowing developers to create
scalable and efficient API endpoints. Flask is particularly favored for microservices,
where modularity and performance are critical.
The API framework is designed using RESTful principles, meaning it adheres to standard
methods for resource management. These methods include GET for retrieving data,
POST for creating new entries, PUT for updating existing records, and DELETE for
removing resources. Communication is facilitated through JSON payloads, which
provide a lightweight and structured format for data exchange between clients and
servers.
Authentication and security are integral to the system, and for that purpose, Firebase
Auth is employed. Firebase Auth provides seamless user verification and authentication
using JSON Web Tokens (JWT). JWT-based authentication ensures that users'
credentials remain secure and that each API request carries the necessary authorization to
access protected resources.
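On the WPF client side, a call to such a protected endpoint might look like the following sketch. The base URL, route, and payload field are assumptions for illustration, not the application's actual API contract:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Net.Http.Json;
using System.Threading.Tasks;

public class ChatApiClient
{
    private readonly HttpClient _http = new()
    {
        BaseAddress = new Uri("https://example-whispr.onrender.com/") // hypothetical Render URL
    };

    // Attach the Firebase-issued JWT so the Flask backend can authorize the request.
    public void SetToken(string jwt) =>
        _http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", jwt);

    // POST an (already encrypted) message as a JSON payload.
    public async Task SendMessageAsync(string conversationId, string cipherTextBase64)
    {
        var response = await _http.PostAsJsonAsync(
            $"api/conversations/{conversationId}/messages",
            new { content = cipherTextBase64 });
        response.EnsureSuccessStatusCode();
    }
}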
Containerization plays a vital role in ensuring scalability and portability of the API. Flask
applications are packaged into Docker containers, enabling consistent deployments across
various environments. This containerized approach allows for quick scaling of API
instances without worrying about dependency conflicts or environment inconsistencies.
Kubernetes orchestrates container management, providing automated scaling and fault
tolerance. When traffic increases, Kubernetes provisions additional API instances to
balance the load, ensuring optimal performance. Load balancing mechanisms distribute
requests evenly across available instances, preventing any single API node from
becoming overloaded.
Logging and debugging mechanisms are built into the system to track API activity and
diagnose issues. Logs capture details of each request, response status codes, and
execution times, helping developers analyze trends and optimize performance.
Debugging tools facilitate error detection, enabling developers to quickly resolve faults
within the application.
With all these elements combined, Flask provides a powerful API framework that excels
in efficiency, scalability, and security. It enables developers to build robust applications
that integrate seamlessly with front-end systems, databases, and authentication services.
Whether for small-scale applications or enterprise solutions, Flask remains a top choice
for designing and deploying web APIs.
Firebase Firestore Structure
At the highest level, the database organizes data into three primary collection groups that
work in concert to support the application's communication features. The users collection
serves as the central repository for all user profile information, storing essential account
details including hashed authentication credentials, display names, contact information,
and cryptographic key material used for end-to-end encryption. Each user document
contains carefully normalized fields to support both efficient querying and secure access,
with sensitive information stored in encrypted form while maintaining indexable
metadata for search functionality.
The conversations collection acts as the organizational backbone for message threads,
containing metadata about each communication channel while delegating actual message
storage to subordinate collections. Each conversation document maintains references to
all participants through carefully designed array fields that enable efficient membership
checks, along with aggregate statistics about message volume and timing to support list
views without requiring expensive subcollection queries. The document structure
includes last-update timestamps managed through Firestore's native server timestamp
feature, ensuring consistent ordering across all client devices.
Nested beneath each conversation document, the messages subcollection contains the
complete history of encrypted communications in chronological order. The subcollection
architecture provides automatic partitioning of message data while maintaining the
parent-child relationship essential for proper access control. Message documents employ
a compound document ID strategy combining millisecond-precision timestamps with
random suffixes to prevent collisions while maintaining sort order. The document fields
include both the encrypted payload and essential metadata required for proper message
rendering and synchronization, with all cryptographic parameters stored alongside the
content to enable seamless decryption across client platforms.
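As an example of the compound ID strategy, a generator along these lines would produce sortable, collision-resistant document IDs (a hypothetical helper, not necessarily the project's implementation):

using System;
using System.Security.Cryptography;

static class MessageIds
{
    // Millisecond timestamp prefix keeps IDs chronologically sortable;
    // a random suffix avoids collisions between concurrent senders.
    public static string Next()
    {
        long ms = DateTimeOffset.UtcNow.ToUnixTimeMilliseconds();
        string suffix = Convert.ToHexString(RandomNumberGenerator.GetBytes(4));
        return $"{ms:D15}_{suffix}";
    }
}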
Security rules form a critical component of the database structure, enforcing granular
access controls at both the collection and document levels. The rules implementation
follows a principle of least privilege, requiring explicit proof of membership for
conversation access while allowing limited public read access to necessary user profile
fields. The rules engine evaluates each operation against complex logical conditions that
verify authentication state, document ownership, and data validation requirements before
permitting any read or write operation. These rules work in tandem with Firebase
Authentication to provide a complete security solution that protects sensitive data while
allowing necessary access patterns.
Indexing strategies play a crucial role in the database's performance characteristics. The
structure includes composite indexes covering all common query patterns, including user
searches, conversation lookups, and message history retrieval. These indexes are
carefully tuned to balance query performance with update efficiency, particularly
important for the high-volume messages subcollection. The indexing approach also
considers the security rules to prevent potential performance degradation during
permission checks.
The database implements a robust versioning system that accommodates future schema
evolution without breaking existing clients. Each document includes a schema version
field that allows for backward-compatible processing logic, with migration routines
handled through background cloud functions. This versioning approach enables seamless
introduction of new features while maintaining access to historical data.
Error handling and conflict resolution strategies are baked into the database structure at
multiple levels. All write operations include optimistic concurrency control markers that
prevent accidental overwrites, while the real-time synchronization system automatically
resolves conflicts using last-write-wins semantics for non-critical fields and application-
defined merge logic for important data elements. The structure includes dedicated
collections for tracking synchronization anomalies and resolution outcomes.
Message Storage Format
Message documents follow a strict schema that separates encrypted content from
necessary metadata for proper system operation. The encrypted payload itself occupies a
single field containing the ciphertext in Base64 encoding, accompanied by separate fields
storing the initialization vector and authentication tag required for successful decryption.
This separation allows the system to verify message integrity before attempting
decryption while keeping all cryptographic parameters associated with their respective
payloads.
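Mapped to a C# type, one message document might be shaped roughly as follows; the field names here are illustrative assumptions:

// Assumed shape of one message document; every cryptographic parameter
// travels with the payload so any authorized client can attempt decryption.
public record MessageDocument(
    string SenderRef,        // reference to the sender's user document
    string CipherTextBase64, // AES-GCM ciphertext, Base64-encoded
    string NonceBase64,      // initialization vector / nonce
    string TagBase64,        // authentication tag, checked before decryption
    long   SentAtUnixMs,     // server-assigned timestamp
    int    ContentType);     // numeric type indicator (text, image, file, ...)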
Metadata fields serve multiple critical functions in the message ecosystem. Precise
timestamps recorded with microsecond resolution provide unambiguous chronological
ordering, while server-side timestamping prevents client clock manipulation. Sender
identifiers use Firebase's native reference system to maintain secure pointers to user
documents without exposing sensitive information. Delivery status fields implement a
state machine tracking message progression from sent to delivered to read, with each
transition marked by server-verified timestamps.
The content type classification system supports extensible media handling through a
numeric type indicator and supplemental metadata fields. Basic text messages require
minimal additional data, while rich media messages include content-specific metadata
such as image dimensions, file sizes, and MIME types. This information remains
unencrypted to facilitate intelligent rendering and bandwidth optimization while
containing no sensitive user data.
File Metadata Schema
File attachments follow a specialized storage paradigm that combines Firestore's query
capabilities with Firebase Storage's binary object handling. The system generates a
unique content-addressable identifier for each attachment based on its cryptographic
hash, storing this reference alongside descriptive metadata in Firestore. The actual binary
content resides in Firebase Storage with strict access controls, retrievable only through
temporary authenticated URLs. This separation allows efficient metadata queries without
exposing binary content to unauthorized access.
The encryption key management system employs a sophisticated key derivation approach
where each conversation generates a unique symmetric key. These conversation keys are
themselves encrypted with each participant's public key and stored in a dedicated
subcollection, ensuring only intended recipients can access the content. The system
supports seamless key rotation without service interruption through versioned key storage
and automatic re-encryption of recent messages during idle periods.
Message documents include additional security parameters that strengthen the overall
system's resilience. Cryptographic nonces prevent replay attacks, while explicit algorithm
identifiers future-proof the system against evolving standards. Version numbers allow for
backward-compatible upgrades to the encryption scheme without breaking existing
conversations. These parameters are verified by strict validation rules before message
acceptance.
The storage format accommodates special message types through extensible type
handlers. System messages like "user joined" notifications employ a distinct encryption
regime that permits server generation while maintaining authenticity guarantees.
Ephemeral messages include additional fields controlling their visibility window and
automatic deletion triggers. Custom message types can extend the base schema through a
plugin architecture that maintains core security properties.
Delivery receipts implement an optimized storage pattern that minimizes write operations
while providing reliable status tracking. Rather than individual status documents, the
system employs batched updates to a dedicated status field within the message document
itself. This approach reduces Firestore costs while maintaining real-time visibility into
message propagation through the network.
The architecture includes specialized handling for message edits and deletions that
maintains conversation integrity while respecting user expectations. Edit histories are
stored as encrypted diffs with forward references, allowing clients to reconstruct the
conversation state at any point in time. Deletion markers employ Firestore's native field
deletion feature while maintaining the document shell for chronological consistency.
Search functionality is enabled through carefully designed unencrypted metadata fields
that support Firestore's query capabilities while revealing no sensitive information. These
include word count metrics, language identifiers, and topic classifiers generated during
message composition. Full content search is handled client-side after decryption for user-
owned messages.
Storage quotas and retention policies are enforced through a combination of Firestore
rules and cloud functions. Users receive configurable storage limits with intelligent
prioritization that preserves recent conversations while automatically archiving older
messages. The system provides clear visual indicators when approaching storage limits.
The message format has been optimized for efficient real-time synchronization. Small
field counts and consistent field ordering minimize serialization overhead, while delta
encoding reduces bandwidth consumption during updates. These optimizations are
particularly valuable for mobile devices operating on constrained networks.
Future extensibility is baked into the message format through reserved field names and
versioned schema definitions. New message types can be introduced without breaking
existing clients, and the encryption system supports pluggable modules for emerging
cryptographic standards. This forward-looking design ensures the platform can evolve
alongside advancing security requirements.
Hosting on Render
Cloud Hosting on Render operates with high efficiency and reliability, ensuring seamless
scalability and optimal performance for production environments. The deployment
architecture is designed to support high availability through automatic scaling
mechanisms, reducing downtime and enhancing service continuity. Render’s cloud
infrastructure facilitates API endpoint hosting with web services running on load-
balanced instances, enabling consistent response times and efficient request handling. In
addition to web services, dedicated background processing workers ensure that
asynchronous tasks are executed smoothly without affecting frontend responsiveness.
The managed PostgreSQL database offers robust relational data management, ensuring
structured and secure information storage. Comprehensive monitoring features track
system performance metrics, allowing proactive measures for optimizing application
functionality.
Render’s infrastructure boasts automatic failover protocols that enhance regional
redundancy, preventing service disruptions. Daily backups safeguard critical data and
ensure disaster recovery preparedness, while zero-downtime deployments maintain
service continuity during updates and enhancements. Security features play a crucial role
in system integrity, with the platform's secret management system securely handling
sensitive credentials, mitigating risks associated with unauthorized access. Additionally,
environment variables enable seamless application configuration control, allowing
adaptive adjustments based on operational requirements.
Monitoring and maintenance play a crucial role in ensuring the reliability and efficiency
of a cloud-hosted backend system. Comprehensive observability tools are integrated to
provide real-time performance metrics, allowing teams to continuously track system
health and proactively identify potential bottlenecks. Error tracking mechanisms log
anomalies, providing detailed insights into failures and their root causes, which enables
rapid debugging and issue resolution. Usage analytics offer valuable data on user
interactions and system utilization, helping to refine features and optimize resource
allocation. Automated alerting systems notify administrators of critical incidents,
allowing swift action to minimize downtime. Additionally, audit trails for all data access
ensure transparency and security, enabling thorough reviews of interactions within the
system.
Render's managed infrastructure offers continuous monitoring capabilities, ensuring that
services remain operational under varying loads. Cloud hosting ensures automatic scaling
to accommodate peak traffic, preventing slowdowns and ensuring a seamless user
experience. The integration of robust security measures guarantees data protection, while
periodic system audits provide insights into performance trends and areas for
improvement. Through structured logging and reporting, anomalies can be identified in
real time, mitigating risks before they impact the user experience.
8. PROJECT IMPLEMENTATION DETAILS
Setting up the Development Environment is essential for ensuring a robust foundation for
the real-time chat application. The development process begins with the installation and
configuration of Visual Studio 2022, which serves as the primary Integrated
Development Environment (IDE). This IDE offers comprehensive debugging
capabilities, seamless integration with version control systems, and strong support
for .NET development. The .NET 6 SDK provides a modern runtime environment
optimized for high-performance applications. Essential NuGet packages such as Firebase
Admin SDK are incorporated to facilitate backend integration, allowing secure
authentication and real-time database access. The Newtonsoft.Json package is employed
for efficient serialization and deserialization of data objects, ensuring compatibility with
JSON-based communication protocols. For cryptographic operations, BouncyCastle is
utilized to implement secure data encryption and decryption, maintaining message
integrity and confidentiality. The project follows the Clean Architecture pattern, which
ensures separation of concerns across distinct layers: Presentation (WPF), Application
(business logic), Domain (entities), and Infrastructure (Firebase integration). This
structured approach enhances maintainability, scalability, and code organization.
Environment variables play a crucial role in securing sensitive credentials. The dotenv
approach is used to store configuration details securely, ensuring that keys and tokens
remain protected from unauthorized access. A dedicated configuration manager handles
application settings dynamically, allowing runtime adjustments based on operational
needs. Docker containers are employed to create isolated development databases,
preventing conflicts between environments and streamlining database management.
Postman collections facilitate API testing during development, enabling rapid validation
of endpoint functionality and ensuring seamless data exchange between services.
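A minimal illustration of this environment-variable pattern (the variable name is hypothetical, and the .env file is loaded before startup in development):

using System;

static class AppConfig
{
    // Secrets come from environment variables (populated from a .env file in
    // development), so no credential is ever committed to source control.
    public static string FirebaseKeyPath =>
        Environment.GetEnvironmentVariable("FIREBASE_ADMIN_KEY_PATH")
        ?? throw new InvalidOperationException("FIREBASE_ADMIN_KEY_PATH is not set");
}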
Chat Polling with DispatcherTimer
The chat application's real-time update system employs an intelligent polling architecture
designed to balance responsiveness with resource efficiency. At its core, a stateful polling
service manages conversation synchronization through an adaptive timing algorithm that
responds dynamically to both network conditions and user activity patterns. The
implementation leverages WPF's DispatcherTimer as the primary scheduling mechanism,
chosen for its seamless integration with the UI thread while maintaining precise timing
control.
The polling infrastructure establishes multiple operational states that govern its behavior.
During active conversations with recent messages, the system operates in a high-
frequency mode with 500ms intervals, ensuring near-real-time message delivery. This
aggressive polling strategy activates when the application detects either user typing
activity or incoming message indicators within the current conversation view. The timer
automatically transitions to a moderate 1500ms interval during periods of passive reading
or when the application window loses focus, reducing server load while maintaining
acceptable latency.
Network quality detection forms a critical component of the adaptive timing system. The
implementation continuously monitors round-trip times and packet loss percentages
through dedicated health-check packets. When latency exceeds 300ms or packet loss
surpasses 5%, the system gradually increases polling intervals up to a maximum of
5000ms, preventing network congestion during unstable conditions. A sophisticated
recovery algorithm reduces intervals when conditions improve, using a combination of
linear backoff and statistical smoothing to avoid rapid oscillation between states.
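In outline, the adaptive timer behavior described above could be expressed as in the following sketch; the interval figures match those cited in the text, while the class and method names are illustrative:

using System;
using System.Windows.Threading;

public class AdaptivePoller
{
    private readonly DispatcherTimer _timer = new();

    public AdaptivePoller(Action pollOnce)
    {
        _timer.Interval = TimeSpan.FromMilliseconds(1500); // moderate default
        _timer.Tick += (_, _) => pollOnce();
        _timer.Start();
    }

    // Called when typing activity or new messages are detected in the open chat.
    public void EnterActiveMode()  => _timer.Interval = TimeSpan.FromMilliseconds(500);

    // Called on window blur or passive reading.
    public void EnterPassiveMode() => _timer.Interval = TimeSpan.FromMilliseconds(1500);

    // Called when measured latency exceeds 300 ms or packet loss passes 5%;
    // intervals grow toward the 5000 ms ceiling rather than jumping to it.
    public void BackOff() =>
        _timer.Interval = TimeSpan.FromMilliseconds(
            Math.Min(_timer.Interval.TotalMilliseconds * 2, 5000));
}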
The polling service maintains comprehensive conversation state tracking through several
key metrics. A precision timestamp registry records the last received message for each
active conversation, allowing the system to prioritize updates to currently viewed threads.
Activity heatmaps track time-of-day patterns and conversation velocity, enabling
predictive preloading of likely active discussions. These metrics feed into a priority
scoring system that dynamically allocates polling resources to the most relevant
conversations.
Message prioritization implements a multi-tiered queueing system that categorizes
updates by urgency. Standard messages follow normal polling cycles, while high-priority
notifications (including @mentions and system alerts) trigger immediate refresh requests
regardless of the current timer state. This interrupt mechanism ensures critical
information receives minimal latency while maintaining efficient polling for routine
traffic.
The technical implementation carefully manages thread affinity to prevent UI blocking.
All network operations execute on dedicated background threads from the ThreadPool,
with completion callbacks marshaled to the UI thread through Dispatcher.Invoke for safe
control updates. The system employs a concurrent message buffer that accumulates
updates between UI refresh cycles, preventing render thrashing while ensuring no
messages are missed.
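The thread-marshaling pattern described here typically looks like the following in WPF (a simplified sketch; FetchMessagesAsync and AppendMessages are hypothetical stand-ins for the real network and UI code):

using System.Collections.Generic;
using System.Threading.Tasks;
using System.Windows;

public partial class ChatView
{
    // Hypothetical helpers standing in for the real network and UI code.
    private Task<List<string>> FetchMessagesAsync() => Task.FromResult(new List<string>());
    private void AppendMessages(List<string> messages) { /* update the message list control */ }

    // Network I/O runs off the UI thread; results are marshaled back through
    // the Dispatcher before any control is touched.
    private async Task PollOnceAsync()
    {
        var newMessages = await Task.Run(() => FetchMessagesAsync());
        Application.Current.Dispatcher.Invoke(() => AppendMessages(newMessages));
    }
}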
Connection health monitoring operates through a secondary heartbeat timer that verifies
channel integrity every 10 seconds. Failed heartbeats trigger a graduated recovery
sequence, beginning with rapid retry attempts before eventually falling back to a full
connection reset. This subsystem integrates with WPF's built-in network change detection
to handle physical connectivity transitions seamlessly.
Resource conservation measures include intelligent polling suspension during application
minimization or prolonged user inactivity. The system detects these states through
window focus events and input monitoring, transitioning to a low-power maintenance
mode that preserves battery life on mobile devices while maintaining essential
connectivity.
Error handling incorporates multiple recovery strategies tailored to specific failure
modes. Network timeouts initiate rapid but bounded retry sequences, while authentication
failures trigger reauthorization flows. The system distinguishes between temporary
outages and permanent errors, adjusting its recovery strategy accordingly to prevent
unnecessary resource consumption during extended service interruptions.
Performance telemetry is collected throughout the polling lifecycle, including detailed
timing metrics for each operation phase. This data feeds into continuous optimization
algorithms that fine-tune interval durations and concurrency limits based on the device's
capabilities and current workload. The telemetry system itself operates with minimal
overhead, using efficient binary logging that is periodically transmitted to analytics
services.
The implementation includes specialized handling for various message types. Ephemeral
messages receive dedicated polling channels to ensure timely expiration, while large file
transfers utilize segmented polling that monitors progress without overwhelming the
network. Read receipts and typing indicators are managed through a separate low-priority
queue that batches updates to reduce server load.
Adaptive quality-of-service adjustments occur in real-time based on system resource
monitoring. When CPU or memory usage exceeds configured thresholds, the polling
system automatically enters a conservation mode that extends intervals and reduces
concurrent operations. These adjustments are logged and reported to help users
understand performance characteristics during resource-constrained operation.
The entire polling architecture is designed for testability, with dependency injection
points for all external services and detailed logging of internal state transitions. This
enables comprehensive unit testing of polling logic under various simulated network
conditions and load scenarios. Mock servers can reproduce specific timing patterns and
failure modes to verify system resilience.
Configuration options allow fine-tuning of polling parameters to accommodate different
usage scenarios. Enterprise deployments can adjust timing thresholds and concurrency
limits through group policy settings, while individual users can prioritize battery life or
responsiveness through application preferences. All configuration changes trigger
immediate runtime adjustments without requiring application restart.
The system integrates with Windows power management APIs to coordinate polling
activity with the device's energy profile. On battery-powered devices, the polling service
automatically aligns its operations with the system's power-saving mode, extending
intervals when power conservation is prioritized. This integration extends to thermal
management, reducing polling frequency during device overheating conditions.
Future extensibility is built into the architecture through a plugin system that allows
additional message types to define their own polling requirements. New conversation
formats can register specialized handlers that optimize their update strategies without
modifying the core polling infrastructure. This flexibility ensures the system can adapt to
evolving communication patterns and message formats.
The HashSet implementation wraps its core operations with fine-grained locking
mechanisms to ensure thread safety in the application's highly concurrent environment. A
ReaderWriterLockSlim instance guards all access to the underlying collection, allowing
unlimited concurrent read operations during UI rendering cycles while enforcing
exclusive access during modification operations. The lock implements a fairness policy to
prevent writer starvation during periods of heavy read activity, ensuring message updates
propagate in a timely manner.
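A minimal sketch of this locking discipline, assuming string message keys like those used elsewhere in the application (the class name is illustrative):

using System.Collections.Generic;
using System.Threading;

public class ConcurrentMessageSet
{
    private readonly HashSet<string> _seen = new HashSet<string>();
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    public bool Contains(string messageKey)
    {
        _lock.EnterReadLock();            // concurrent readers during rendering
        try { return _seen.Contains(messageKey); }
        finally { _lock.ExitReadLock(); }
    }

    public bool Add(string messageKey)
    {
        _lock.EnterWriteLock();           // exclusive access for modifications
        try { return _seen.Add(messageKey); }
        finally { _lock.ExitWriteLock(); }
    }
}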
Memory management follows a tiered strategy that optimizes resource usage based on
message age and access frequency. The primary HashSet maintains strong references to
all messages in the active conversation view plus a configurable buffer of recent
messages (typically the last 200-300 items). This working set ensures instant access to
relevant conversation history without excessive memory consumption. For older
messages beyond the immediate viewing window, the system employs weak references
that allow garbage collection when memory pressure increases, while still maintaining
the ability to reconstitute messages if needed through cache warming techniques.
The system implements a sophisticated delta processing pipeline for incoming message
batches. When new messages arrive from the polling service, they first pass through a
preprocessing stage that normalizes timestamps and verifies cryptographic signatures.
The cleaned messages then enter a comparison phase where the HashSet's containment
checks rapidly identify truly new messages requiring processing. This delta detection
typically handles batches of 20-50 messages in sub-millisecond timeframes, even for
conversations containing tens of thousands of historical messages.
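The delta step itself reduces to a containment-driven filter. A simplified sketch, with illustrative names:

using System.Collections.Generic;

public static class DeltaDetector
{
    // Returns only the messages not yet displayed.
    public static List<string> FilterNew(IEnumerable<string> polledKeys, HashSet<string> displayed)
    {
        var fresh = new List<string>();
        foreach (string key in polledKeys)
        {
            if (displayed.Add(key))   // Add is O(1) and returns false for duplicates
                fresh.Add(key);
        }
        return fresh;
    }
}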
An LRU caching layer sits atop the core HashSet storage, tracking access patterns to
optimize memory utilization. The cache maintains strong references to the most
frequently accessed messages regardless of their position in the conversation history,
while allowing less-frequently referenced messages to transition to weak references. This
adaptive strategy proves particularly valuable when users scroll through long
conversation histories or search for specific messages, as it automatically promotes
relevant messages to faster access tiers.
The implementation includes specialized handling for message mutations such as edits
and deletions. Rather than simple removal operations, the system maintains tombstone
records for deleted messages to preserve conversation continuity and prevent
synchronization issues. Edited messages generate new hash values while maintaining
links to their previous versions, allowing the UI to render edit histories when requested.
These special cases are handled through an extension of the core equality comparer that
understands message versioning relationships.
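An illustrative comparer of this kind is shown below: edits bump a revision and therefore produce a distinct hash, while deletions keep a tombstone record. The message shape here is assumed, not the application's actual model.

using System;
using System.Collections.Generic;

public class VersionedMessage
{
    public string Id { get; set; }          // stable message identity
    public int Revision { get; set; }       // bumped on every edit
    public bool IsTombstone { get; set; }   // true once the message is deleted
}

public class VersionedMessageComparer : IEqualityComparer<VersionedMessage>
{
    public bool Equals(VersionedMessage x, VersionedMessage y) =>
        x?.Id == y?.Id && x?.Revision == y?.Revision;

    public int GetHashCode(VersionedMessage m) =>
        HashCode.Combine(m.Id, m.Revision);  // edits yield a distinct hash
}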
Event notification follows a publisher-subscriber model that efficiently communicates
message state changes to interested UI components. When the HashSet processes
additions, updates, or deletions, it raises granular events that include change type
information and affected message ranges. Subscribers can then optimize their rendering
strategies - for example, only refreshing the visible message viewport rather than the
entire conversation history. The eventing system employs a batching mechanism that
coalesces rapid successive changes into single notifications during periods of high
activity.
Performance optimization extends to the physical memory layout of stored messages. The
implementation uses structure-of-arrays techniques for message property storage when
handling large conversations, improving cache locality for common operations like
timestamp sorting or sender filtering. Frequently accessed properties such as message
timestamps and sender identifiers are stored in contiguous memory regions, while larger
payloads like message content remain in separate buffers.
The system includes comprehensive instrumentation that tracks key performance metrics
including hash collision rates, memory usage patterns, and lock contention statistics.
These metrics feed into runtime optimization algorithms that dynamically adjust
parameters like the LRU cache size or HashSet capacity to maintain optimal performance
under varying load conditions. During periods of low system resources, the
implementation can automatically transition to more memory-efficient (though slightly
slower) operating modes.
Validation and error handling form an integral part of the message processing pipeline.
All incoming messages undergo schema validation before HashSet insertion, with
malformed messages redirected to a quarantine area for inspection. The system includes
repair mechanisms for common issues like clock skew between devices or temporary
hash collisions, maintaining conversation integrity even in edge cases.
The architecture supports seamless integration with the application's undo/redo system
through message version tagging. Each modification to the HashSet is accompanied by
version metadata that allows reconstruction of previous states when needed. This
capability extends to the entire conversation history, enabling features like "view
conversation as of [date/time]" without requiring separate snapshot storage.
Future extensibility is built into the core design through a pluggable equality comparison
system. New message types can register specialized comparers that understand their
unique characteristics, while still participating in the centralized HashSet management.
This flexibility ensures the system can accommodate novel message formats without
compromising the efficiency of existing conversation types.
Message Decryption Logic
The pipeline initiates with rigorous examination of all cryptographic parameters before
commencing decryption operations. Initialization vectors undergo length verification
against AES-GCM specifications, while authentication tags are checked for proper
formatting and expected length. The system maintains a security log tracking parameter
validation outcomes, flagging any anomalies for administrator review. Digital signatures
attached to encrypted messages are verified using elliptic curve cryptography, ensuring
message authenticity before further processing.
Optimized Decryption Execution
The core decryption engine dynamically selects the most efficient implementation based
on hardware capabilities. Modern x86 processors utilize AES-NI instruction sets through
careful P/Invoke wrappers that maintain memory security. ARM-based devices leverage
cryptographic extensions where available, while fallback implementations use rigorously
audited BouncyCastle providers. All decryption operations occur in isolated memory
regions that are promptly zeroized after use, preventing sensitive data leakage through
memory inspection techniques.
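A minimal managed-code sketch of the validate-then-decrypt flow, using .NET's built-in AesGcm class rather than the hardware-specific paths described above. The sizes follow the AES-GCM specification (12-byte nonce, 16-byte tag); the method shape and names are assumptions.

using System;
using System.Security.Cryptography;
using System.Text;

public static class GcmDecryptor
{
    public static string DecryptMessage(byte[] key, byte[] nonce, byte[] ciphertext, byte[] tag)
    {
        // Parameter validation before any decryption work begins.
        if (nonce.Length != 12) throw new CryptographicException("Invalid IV length.");
        if (tag.Length != 16) throw new CryptographicException("Invalid authentication tag length.");

        byte[] plaintext = new byte[ciphertext.Length];
        try
        {
            using var aes = new AesGcm(key);
            aes.Decrypt(nonce, ciphertext, tag, plaintext);   // throws on tag mismatch
            return Encoding.UTF8.GetString(plaintext);
        }
        finally
        {
            // Zeroize the working buffer; the returned managed string itself
            // cannot be zeroized, a known limitation of this simplified sketch.
            CryptographicOperations.ZeroMemory(plaintext);
        }
    }
}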
Styling with XAML
XAML styling plays a crucial role in shaping the application's visual experience, ensuring that the interface remains cohesive, intuitive, and aesthetically appealing. The
design approach embraces a centralized resource dictionary architecture, which
consolidates all styling elements for enhanced maintainability and consistency across the
application. This method allows for seamless updates to styling components without the
need for repetitive modifications across individual UI elements. By implementing this
structure, developers can efficiently manage themes and adapt the appearance of the
application dynamically.
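For instance, a runtime theme swap can reduce to replacing the application-level merged dictionary; the Themes/Dark.xaml path below is an assumed example, not a file from the project.

using System;
using System.Windows;

public static class ThemeSwitcher
{
    // Swapping the merged dictionary restyles every control at once.
    public static void Apply(string themePath)   // e.g. "Themes/Dark.xaml" (assumed)
    {
        var dict = new ResourceDictionary { Source = new Uri(themePath, UriKind.Relative) };
        Application.Current.Resources.MergedDictionaries.Clear();
        Application.Current.Resources.MergedDictionaries.Add(dict);
    }
}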
Control templates are extensively customized to provide a uniform behavior across major
UI components, including buttons, list items, input fields, and navigation elements. These
templates incorporate predefined visual states such as hover, pressed, disabled, and
focused conditions, delivering clear interactive feedback to users. For instance, buttons
utilize subtle animations and color transitions to indicate their state changes, ensuring that
users receive immediate visual confirmation when interacting with different elements.
List items feature alternating background shades to enhance readability, while input fields
employ accent borders to indicate focus.
Beyond basic typography, custom styles extend foundational definitions with additional
properties such as padding, margins, alignment, and shadow effects. Padding and margins
are carefully calibrated to ensure adequate spacing between elements, preventing a
cluttered interface. Alignment properties dictate precise positioning for elements such as
headers, buttons, and labels, maintaining structured layouts. Subtle shadow effects
enhance depth perception, providing a modern aesthetic that differentiates active
components from background elements.
The styling system benefits from extensive design-time support within Visual Studio's
XAML designer. This integration enables developers to visualize component styling in
real time without executing the application, significantly streamlining the design
workflow. Developers can adjust properties such as color gradients, border thickness, and
animation timings while instantly previewing the effects. Additionally, data-binding
capabilities within XAML allow styles to adapt dynamically based on application state,
ensuring seamless responsiveness.
Advanced styling techniques include adaptive layouts that scale proportionally across different window sizes. Grid-based positioning ensures that UI components maintain proportional alignment regardless of resolution variations. Size-based triggers, WPF's closest analogue to CSS media queries, refine layout behavior for smaller windows and touch-enabled devices, accommodating touch interactions and scaled-down elements while preserving usability.
Performance optimizations ensure that styling transitions and animations remain smooth
without excessive resource consumption. GPU acceleration is leveraged for rendering
complex visual effects, reducing CPU load and preventing unnecessary lag. Style caching
mechanisms minimize redundant processing, ensuring that UI rendering remains efficient
even under high interaction rates. Lightweight vector assets replace traditional bitmap-
based icons, reducing memory overhead and enhancing rendering precision.
Animations within XAML styling follow best practices to ensure natural motion effects
that enhance user experience without causing distractions. Button press interactions
include subtle scaling effects, list scrolling incorporates easing motions, and modal pop-
ups feature smooth fade-ins. These animations contribute to an intuitive and engaging
interface while maintaining responsiveness.
The application seamlessly integrates a dual-theme color system that dynamically adjusts
between light and dark modes, aligning with Windows system settings to enhance user
experience. This approach ensures that users receive an optimal visual experience without
needing manual adjustments. The carefully curated color palette adheres to Fluent Design
principles, incorporating primary, secondary, and tertiary accent colors to maintain visual
harmony and accessibility across different UI states. Luminosity variations are
strategically implemented to accommodate different interaction states, such as active,
hover, disabled, and selected elements. These variations enhance contrast while
preserving a modern aesthetic.
Custom ValueConverters dynamically calculate font sizes and margins based on window
dimensions, ensuring readability remains consistent across different display
configurations. This methodology optimizes text presentation while preserving
proportional spacing between UI elements. Transition effects between states are
implemented using FluidMoveBehavior, which delivers smooth animations that maintain
natural motion flow without disrupting performance. These transitions contribute to an
engaging user experience by creating seamless interactions that enhance usability.
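A hedged sketch of such a converter follows, with an assumed scale factor and bounds. Bound to the window's ActualWidth, it recomputes the font size whenever the layout changes.

using System;
using System.Globalization;
using System.Windows.Data;

public class WidthToFontSizeConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        double width = (double)value;                 // bound to Window.ActualWidth
        return Math.Clamp(width / 60.0, 12.0, 22.0);  // keep sizes readable (assumed bounds)
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
        => throw new NotSupportedException();
}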
A high-contrast mode is integrated into the system to further accommodate users with
visual impairments. Instead of relying solely on color differentiation, this mode utilizes
texture and shape cues to improve recognition and usability. The high-contrast
configuration is activated through Windows Ease of Access settings, ensuring
compatibility with accessibility requirements. Visual assets are rendered in multiple
resolutions, automatically selecting the optimal version based on display DPI settings.
This technique optimizes visual clarity across diverse screen types, maintaining sharpness
without excessive scaling artifacts.
Notification and Reload Button System
A reload button follows Fluent Design principles, featuring a circular form factor with
rotation animations during refresh operations. The button transitions through multiple
visual states, including a normal outline style with subtle opacity, a hover state with solid
fill and slight scale increase, an active state featuring a spinning animation alongside a
progress ring, and a disabled state with reduced opacity accompanied by a tooltip
explanation. Button placement follows Fitts' Law principles, ensuring optimal
reachability by positioning it in the top-right corner of content panels. Smart reload logic
enhances the refresh experience by preserving scroll position when possible, applying
diff-based updates rather than full-page refreshes, visually confirming update completion,
and implementing exponential backoff strategies for repeated manual refreshes.
UI performance optimizations include bitmap caching for static visual elements, reducing
redundant processing and improving resource efficiency. Opacity masks replace complex
clipping operations, allowing smoother transitions and improved rendering accuracy.
Hardware-accelerated animations leverage GPU processing power, ensuring fluid motion
effects without taxing CPU resources. Asynchronous loading techniques guarantee that
non-critical visuals are processed separately, preventing interruptions in essential
operations. Interactive components conform to Microsoft's Fluent Design timing
guidelines, maintaining consistency across animations with entrance durations set to
300ms and exit transitions calibrated at 200ms. Subtle physics-based easing functions
ensure natural responsiveness, preventing abrupt movements that could disrupt user
interaction. The system includes instrumentation tools that monitor UI performance
metrics such as frame rates during animations and evaluate input response times to
optimize responsiveness under varying conditions.
By implementing a meticulously crafted color scheme, adaptive responsiveness
strategies, structured notification hierarchy, and optimized UI performance techniques,
the application delivers an intuitive and accessible user experience. The seamless
integration of Fluent Design principles ensures aesthetic consistency, while the emphasis
on usability and accessibility enhances inclusivity across diverse user demographics.
Through dynamic visual elements, efficient rendering processes, and intelligent
interaction design, the system maintains a modern interface that adapts smoothly to
evolving user requirements.
Testing Strategies
Unit Testing for Encryption
Edge cases are specifically targeted to assess encryption resilience under unusual
conditions. Empty messages undergo testing to confirm proper handling without causing
unexpected application behavior. Maximum-length inputs are evaluated to ensure system
stability, while deliberately corrupted ciphertexts serve to test error detection and secure
failure mechanisms. The system is designed to provide immediate error feedback upon
detecting any tampered or malformed data, preventing security breaches. Robust memory
safety tests are implemented using debug allocators, rigorously checking for zeroization
of sensitive buffers once they are no longer needed. These safeguards ensure that
encryption keys and decrypted messages do not persist in memory, mitigating the risk of
unauthorized access or data leakage.
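Two of these edge cases might look as follows in xUnit form; CryptoHelper's exact Encrypt/Decrypt signatures are assumed for illustration.

using System;
using Xunit;

public class EncryptionEdgeCaseTests
{
    [Fact]
    public void EmptyMessage_RoundTrips()
    {
        string cipher = CryptoHelper.Encrypt(string.Empty);
        Assert.Equal(string.Empty, CryptoHelper.Decrypt(cipher));
    }

    [Fact]
    public void TamperedCiphertext_FailsSecurely()
    {
        char[] cipher = CryptoHelper.Encrypt("hello").ToCharArray();
        cipher[0] = cipher[0] == 'A' ? 'B' : 'A';   // corrupt one character
        Assert.ThrowsAny<Exception>(() => CryptoHelper.Decrypt(new string(cipher)));
    }
}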
Unit testing methodologies extend beyond encryption cycles to assess compatibility with
different application states. Encryption tests are performed under multiple conditions,
including high-volume message exchanges, concurrent encryption requests, and rapid
sequential operations. This ensures that the system operates effectively under real-world
scenarios without compromising performance or security. Automated testing frameworks
are employed to simulate realistic workloads, exposing potential vulnerabilities and
confirming system stability. The results are compiled into detailed reports, providing
insights into encryption behavior across different test scenarios.
These testing strategies collectively reinforce the reliability of the encryption subsystem,
ensuring that all cryptographic operations adhere to stringent security standards. By
implementing rigorous validation measures, performance monitoring, and comprehensive
error handling, the system is designed to withstand a wide range of operational demands
while safeguarding sensitive data. Continuous testing and optimization enhance
encryption robustness, maintaining a secure communication platform that upholds data
integrity and confidentiality across all interactions.
Integration Testing with Firebase
Firebase integration testing ensures the reliable functionality of the backend system under
realistic conditions by validating end-to-end processes within a controlled test
environment. The Firebase Console hosts a dedicated test project where various
components are examined for correctness, efficiency, and resilience. The test suite covers
over fifty distinct scenarios, each designed to evaluate user authentication flows, real-
time database synchronization, and Firestore CRUD operations. These tests measure both
functional accuracy and performance indicators such as synchronization latency, ensuring
that database updates occur within acceptable timeframes. By examining a broad
spectrum of potential user interactions, the testing process guarantees that the system
remains robust and adaptable to real-world demands.
Network condition simulation plays a crucial role in assessing performance under adverse
circumstances. Tools such as Clumsy allow controlled network degradation to mimic
real-world network instability, exposing potential weak points in communication between
the application and Firebase services. Various conditions are simulated, including packet
loss ranging from zero to twenty percent, latency spikes varying from zero to five
hundred milliseconds, and intermittent connectivity disruptions. These tests reveal how
the system responds to fluctuating network quality, providing valuable insights into
recovery mechanisms and optimization strategies.
Security testing is an integral part of the Firebase integration suite, focusing on enforcing
Firestore rules and preventing unauthorized operations. The system is tested rigorously
by attempting unauthorized database modifications from designated test clients, ensuring
that security policies remain impenetrable. These validation procedures confirm that
access control mechanisms properly restrict actions based on predefined user roles and
permissions. In addition to structural security assessments, penetration tests detect
vulnerabilities that could allow unintended data exposure or privilege escalation,
reinforcing application security across all operational layers.
Recovery testing evaluates the system’s ability to handle disruptions gracefully. Various
failure scenarios are analyzed, including expired authentication tokens, offline writes that
require synchronization upon reconnection, and conflicting concurrent edits. When
authentication tokens expire, the system ensures that users are prompted to refresh
credentials before proceeding, preventing unauthorized access. Offline write operations
undergo deferred synchronization, preserving data integrity while efficiently updating
records once connectivity is restored. Concurrent edit conflict resolution strategies
prioritize consistency by merging changes or prompting user intervention when
necessary. These tests verify that recovery protocols are capable of maintaining stability
even under unpredictable conditions.
Test data management is handled efficiently through Firebase Admin SDK cleanup
scripts. Once test runs conclude, data purging procedures remove redundant records to
maintain a clutter-free environment. This practice minimizes residual storage overhead,
ensuring the test project remains lightweight and reflective of current evaluations. By
automating cleanup operations, test cycles remain structured without accumulating
excessive data artifacts.
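A cleanup pass of this kind is sketched below with the Firestore .NET client; the test project ID and collection name are placeholders, and the report's actual Admin SDK scripts could equally be written in Python.

using System.Threading.Tasks;
using Google.Cloud.Firestore;

public static class TestDataCleanup
{
    public static async Task PurgeTestMessagesAsync()
    {
        FirestoreDb db = FirestoreDb.Create("whispr-test-project");   // assumed test project ID
        QuerySnapshot snapshot = await db.Collection("messages").GetSnapshotAsync();
        foreach (DocumentSnapshot doc in snapshot.Documents)
            await doc.Reference.DeleteAsync();   // remove residual test records
    }
}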
Through this integration testing framework, Firebase services undergo rigorous
evaluation to uphold reliability, security, and efficiency under varied conditions. The
combination of authentication validations, real-time synchronization assessments,
security rule enforcement, and resilience testing provides a comprehensive approach to
verifying system integrity. Performance benchmarks further enhance optimization
strategies, ensuring the backend infrastructure performs efficiently and securely across
diverse user scenarios.
UI Testing with WPF
Each test follows a structured sequence, beginning with the initialization of the test state
to replicate an authentic application environment. This is followed by the execution of UI
interactions, including user-driven actions such as selecting buttons, entering text, or
adjusting scroll positions. Post-execution validation assesses logical consistency and
visual accuracy, confirming correct responses to user interactions. Finally, cleanup
processes remove temporary data to maintain a stable test environment, preventing
residual effects from influencing subsequent tests.
Performance testing assesses the rendering efficiency of complex views populated with
extensive data sets, such as lists containing over 1,000 items. The system measures frame
rates during scrolling and animation sequences, ensuring a consistent refresh rate of 60fps
to maintain a fluid user experience. Automated screenshot capture detects unintended
visual regressions, using perceptual diff algorithms to identify discrepancies between
expected and actual renderings. Input tests encompass multiple interaction methods,
including touch gestures, mouse actions, keyboard inputs, and stylus engagement,
ensuring universal usability across varying input devices.
The test automation infrastructure operates with a dedicated test runner, facilitating
streamlined execution across diverse test categories. Parallel test execution optimizes
runtime efficiency by isolating resources, preventing interference between simultaneous
tests. Configurable test suites accommodate different test scenarios, allowing developers
to focus on specific areas of UI validation. Live progress reporting provides insights into
test execution status and logs real-time data for troubleshooting. Flaky tests undergo
automatic retry cycles, ensuring consistent validation of occasionally unstable cases.
Historical performance trending enables long-term monitoring of UI responsiveness,
identifying gradual deviations in rendering times or interaction lag. Code coverage
analysis evaluates the effectiveness of test implementations, aiming for an extensive
coverage threshold of 85% or higher.
Maintenance and Deployment
For verification purposes, each dependency entry carries package hashes generated by pip-compile's --generate-hashes mode, pinning the complete dependency tree. This ensures integrity and prevents inadvertent version mismatches. Continuous integration (CI) pipelines incorporate
automated validation of the requirements file, proactively detecting any conflicts or
missing packages before deployment. These checks prevent deployment failures and
ensure seamless operation across staging, testing, and production environments.
The deployment configuration follows the 12-factor app methodology, ensuring clarity
and scalability. The Procfile plays a crucial role in defining application processes,
categorizing operations into distinct roles that optimize workload distribution. These
processes include web services, which utilize Gunicorn workers for managing incoming
requests efficiently, background tasks executed via Celery for asynchronous processing,
and scheduled jobs managed by APScheduler for routine maintenance tasks. By clearly
delineating these roles, the system ensures resource allocation is both optimal and
predictable. This structured approach allows different components to operate
independently while maintaining coordinated execution across services.
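A Procfile expressing these three roles might look like the following sketch, where the app:app module path, the Celery application name, and the scheduler script are assumptions; APScheduler jobs often run inside the web or worker process instead of a dedicated clock entry.

web: gunicorn app:app --workers 4
worker: celery -A app.celery worker --loglevel=info
clock: python run_scheduler.py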
Security measures are embedded into the deployment pipeline, incorporating validation
against known vulnerabilities. Dependency scanning tools analyze requirements.txt for
exposure to security risks, proactively preventing exploitation. By leveraging automated
monitoring tools, the infrastructure maintains a proactive stance on security, promptly
addressing any concerns before they impact system integrity.
This maintenance and deployment framework ensures that the project remains reliable,
scalable, and secure throughout its lifecycle. By implementing structured validation
processes, automated testing, environment-specific configuration management, and
security integrations, the system achieves a streamlined approach to maintaining
operational efficiency while supporting continuous improvements and adaptability.
The Render deployment flow ensures a seamless transition between application versions
using a sophisticated zero-downtime blue-green strategy. This approach minimizes
service interruptions and optimizes reliability by maintaining both the previous and new
versions in parallel until the rollout completes successfully. The deployment process
begins with automatic infrastructure provisioning, utilizing Render’s infrastructure-as-
code templates to define essential services such as compute instances, PostgreSQL
databases, Redis caching, and managed cron jobs. These configurations enable a
consistent deployment environment across all instances, reducing variability and potential
misconfigurations.
Each code push initiates a container build sequence designed to maintain security and
stability. The process incorporates security scanning through Snyk to detect
vulnerabilities, dependency auditing via pip-audit to ensure package integrity, and
comprehensive unit test execution to verify application correctness. These validations
guarantee that only thoroughly vetted builds progress to the next stage, reducing the
likelihood of introducing errors or security flaws into the production environment.
Following successful validation, new containers undergo a controlled canary deployment,
where they serve approximately five percent of live production traffic. This phased
rollout allows for real-world validation of the latest application version in a limited scope,
mitigating risks associated with widespread deployment failures. Health checks
continuously monitor key performance indicators, such as response latency and error
rates, assessing the stability of the new containers. If predefined thresholds are exceeded,
automatic rollback mechanisms restore the previous version, preventing widespread
disruptions.
A full rollout is executed gradually over fifteen minutes, adjusting load balancer weights
dynamically to shift incoming traffic towards the new instances. This gradual traffic
migration ensures smooth adoption while mitigating potential performance bottlenecks.
During this phase, detailed monitoring systems track application behavior, promptly
identifying and addressing anomalies that could arise from the transition.
This structured deployment flow enhances service reliability, reduces downtime risks,
and optimizes application performance. By integrating automated provisioning, rigorous
security checks, gradual rollout mechanisms, and proactive monitoring, the Render
deployment pipeline maintains operational efficiency and seamless updates without
disrupting active users.
The update and bug fix strategy employs a structured maintenance approach guided by
semantic versioning to ensure stability and transparency. Updates follow a systematic
schedule, with major updates released quarterly, introducing significant new features and
architecture changes. Minor updates occur monthly, refining existing functionality while
adding incremental improvements. Patch releases address urgent issues weekly, although
critical fixes are deployed immediately when necessary to minimize disruption. This
structured approach ensures that development remains consistent while maintaining
flexibility for emergency fixes.
Hotfix branches enable rapid patching without interrupting the standard release cycles.
These branches isolate emergency fixes, allowing quick deployment while ongoing
feature development continues uninterrupted. This dual-track development ensures that
critical bugs are resolved promptly without delaying long-term enhancements. Each
update includes structured versioned database migrations to maintain data integrity while
introducing schema changes. Backward-compatible API modifications prevent
disruptions to existing integrations, allowing external services to continue functioning
seamlessly. Comprehensive changelogs accompany every deployment, outlining
modifications in detail to facilitate understanding among developers and users.
Monitoring plays a vital role in maintaining system health, spanning three key metrics:
performance baselines, business activity, and infrastructure stability. Performance
monitoring tracks response times at the 95th percentile to detect deviations from
expected latency values. Business metrics focus on active conversations, evaluating user
engagement trends and interaction rates to measure service effectiveness. System health
assessments monitor queue depths, identifying bottlenecks that may impact processing
efficiency. Alert thresholds dynamically adjust based on time-of-day patterns and known
traffic fluctuations, preventing false alarms while ensuring timely identification of
genuine issues.
By implementing this structured update and bug fix strategy, the development process
remains stable while allowing rapid resolution of critical issues. The integration of
transparent communication channels, well-defined triage classifications, automated
rollback capabilities, and proactive monitoring ensures the application remains reliable,
scalable, and responsive to evolving user needs. This comprehensive approach supports
long-term sustainability while maintaining a consistent development velocity, ensuring
that both planned updates and emergency fixes align with best practices in software
maintenance.
Release branches originate from the develop branch and undergo additional integration
testing before merging into production. This structured approach allows features and
improvements to be thoroughly validated before reaching the main branch. Integration
testing ensures that modifications align with system requirements, reducing the likelihood
of post-deployment issues. By maintaining a dedicated release branch workflow,
developers gain flexibility in preparing stable versions while concurrently working on
new enhancements in the develop branch.
Hotfix branches deviate from the main branch when urgent fixes are required. These
branches serve to address critical issues that demand immediate resolution, ensuring swift
patches without disrupting ongoing development efforts. Mandatory backporting of
hotfixes to the develop branch ensures consistency across code versions, preventing
discrepancies between production and development environments. This practice
minimizes technical debt and maintains alignment between deployed and evolving
codebases.
By integrating Git Flow with disciplined review, testing, and documentation practices,
the development workflow remains agile while safeguarding system stability. This
strategy enables efficient collaboration, minimizes deployment risks, and ensures a
streamlined process for managing code updates. Through automated validations,
controlled branching mechanisms, and thorough documentation, the repository maintains
reliability and transparency across all development efforts.
Monitoring and observability play a crucial role in ensuring the stability and performance
of production deployments. The application incorporates comprehensive instrumentation
to track real-time system behavior and detect anomalies before they impact users.
Structured application logs are formatted as JSON, allowing for efficient parsing and
analysis. Each log entry includes detailed request tracing, enabling developers to follow
execution paths across different services and components. This structured approach
facilitates debugging and enhances transparency by maintaining a historical record of
system interactions. Logs are automatically indexed and stored in a centralized system,
making retrieval fast and scalable for troubleshooting efforts.
Metrics collection is handled by Prometheus, which monitors more than fifty system
indicators, ranging from CPU and memory utilization to request latency and database
query times. These indicators provide valuable insights into application health and
performance trends. Custom metrics extend observability by tracking domain-specific
indicators such as user activity rates, API response times, and backend processing
efficiency. Prometheus operates alongside alerting mechanisms that notify engineers of
abnormal behavior, ensuring proactive resolution of potential issues before they escalate
into outages.
Real-user monitoring provides session replays that capture live interactions, revealing
usage patterns and areas for improvement. These replays help identify friction points,
ensuring that the UI remains intuitive and responsive. Synthetic transactions complement
user monitoring by simulating critical workflows, such as payment processing and
account management, to verify functionality under controlled conditions. These tests run
periodically to validate user journeys, catching regressions before they impact customers.
Capacity planning ensures that the deployment infrastructure remains scalable and
efficient under varying workloads. Autoscaling rules adjust resource allocation based on
CPU, memory, and queue metrics, ensuring optimal performance during traffic surges.
Scheduled scaling patterns accommodate known usage fluctuations, optimizing
infrastructure costs while maintaining readiness for high-demand periods. Load testing is
conducted before major releases to simulate peak traffic conditions and verify system
resilience. These tests measure response times, database throughput, and caching
efficiency to prevent degradation under increased load. Monthly cost optimization
reviews analyze resource consumption trends, ensuring that infrastructure expenses
remain within budget without compromising availability.
On the client side, robust implementations will guarantee high availability and
uninterrupted communication. Connection state management will incorporate automatic
reconnection strategies, utilizing exponential backoff logic starting at one second and
extending up to thirty seconds during persistent failures. This mechanism ensures a
graceful recovery from network disruptions, preventing unnecessary reconnection
attempts while optimizing response times. To further improve performance, message
batching will consolidate multiple updates within configurable windows ranging from 50
to 200 milliseconds, reducing redundant transmissions and lowering bandwidth usage.
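In SignalR client terms, that reconnection schedule could be expressed directly through WithAutomaticReconnect; the hub URL below is a placeholder. On the server side, the Redis backplane described next would pair with this via the AddStackExchangeRedis extension from the Microsoft.AspNetCore.SignalR.StackExchangeRedis package.

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR.Client;

public static class ChatConnectionFactory
{
    public static async Task<HubConnection> ConnectAsync()
    {
        HubConnection connection = new HubConnectionBuilder()
            .WithUrl("https://example.invalid/chathub")      // placeholder hub URL
            .WithAutomaticReconnect(new[]
            {
                TimeSpan.FromSeconds(1),                     // start at one second
                TimeSpan.FromSeconds(2),
                TimeSpan.FromSeconds(4),
                TimeSpan.FromSeconds(8),
                TimeSpan.FromSeconds(16),
                TimeSpan.FromSeconds(30)                     // back off up to thirty seconds
            })
            .Build();

        await connection.StartAsync();
        return connection;
    }
}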
The server infrastructure will scale horizontally through Redis backplanes, facilitating
state synchronization across distributed nodes. This scalability model will enable each
server instance to handle upwards of 10,000 concurrent connections, ensuring smooth
interactions even during high-traffic conditions. Redis will be instrumental in maintaining
consistency across connection states, minimizing latency in synchronization processes
and enhancing real-time data propagation.
Message persistence strategies will support offline clients, ensuring that undelivered
messages remain accessible for up to thirty days. This guarantees continuity in
conversations even when users temporarily disconnect, preserving critical data without
requiring immediate retrieval. Additionally, detailed connection analytics will track
metrics such as message rates and error occurrences, providing insights into system
health and optimizing performance tuning.
Through these future enhancements, the platform will achieve a transformative shift
toward real-time, intelligent, and adaptive communication. The SignalR integration will
not only improve responsiveness but also ensure a highly scalable, fault-tolerant
infrastructure capable of supporting a growing user base. This modernization effort will
redefine interaction efficiency while maintaining the robust architecture necessary for
sustained long-term operation.
To foster inclusivity, accessibility features will be incorporated into the media system,
including dedicated streams for sign language interpreters. This ensures effective
communication for individuals who rely on visual interpretation of spoken language,
reinforcing the platform’s commitment to equitable user experiences. By implementing
these advanced enhancements, the real-time media subsystem will provide a high-quality,
scalable, and secure communication infrastructure suitable for a wide range of use cases,
from casual conversations to professional virtual meetings.
File Compression Before Transfer
The transfer protocol will employ chunked uploads, dividing files into manageable one-
megabyte segments, each accompanied by checksums to ensure data integrity during
transmission. Parallel transfers will be supported with three to five simultaneous
connections, optimizing speed while balancing resource consumption. Progressive
decompression techniques will allow stream processing, enabling users to access file
contents as they are received, reducing wait times. Delta updates will refine revision
handling by transferring only modified portions of a file rather than requiring complete
reuploads, minimizing bandwidth usage.
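A simplified, sequential sketch of that scheme follows; the endpoint and header names are assumptions, and the planned three to five parallel connections are omitted for clarity.

using System;
using System.IO;
using System.Net.Http;
using System.Security.Cryptography;
using System.Threading.Tasks;

public static class ChunkedUploader
{
    public static async Task UploadInChunksAsync(HttpClient client, string path, string uploadUrl)
    {
        const int ChunkSize = 1024 * 1024;    // one-megabyte segments, per the plan
        byte[] buffer = new byte[ChunkSize];
        using FileStream file = File.OpenRead(path);

        int read, index = 0;
        while ((read = await file.ReadAsync(buffer, 0, ChunkSize)) > 0)
        {
            byte[] chunk = buffer[..read];
            string checksum = Convert.ToHexString(SHA256.HashData(chunk));  // integrity check

            var content = new ByteArrayContent(chunk);
            content.Headers.Add("X-Chunk-Index", index.ToString());     // assumed header
            content.Headers.Add("X-Chunk-Checksum", checksum);          // assumed header
            (await client.PostAsync(uploadUrl, content)).EnsureSuccessStatusCode();
            index++;
        }
    }
}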
With these enhancements, the file transfer subsystem will deliver a streamlined and
intelligent compression model, ensuring reduced data loads, improved transmission
speeds, and high-quality file preservation. The combination of adaptive compression
techniques, efficient transfer mechanisms, and customizable client-side controls will
create a robust framework capable of handling modern file-sharing demands effectively.
In the second quarter, the Media Alpha phase will introduce experimental real-time
media capabilities. The initial implementation will support one-on-one calling in beta
mode, allowing controlled testing and refinement. Echo test tools will help diagnose
audio and network conditions, ensuring optimal call clarity before the wider rollout.
Network diagnostics tools will analyze connection stability, monitoring latency
fluctuations, packet loss rates, and jitter effects to fine-tune media transmission
algorithms.
The third quarter will bring the Compression Suite, implementing an advanced file
compression engine designed to improve transfer efficiency while preserving quality. The
transfer manager UI will provide users with intuitive controls to oversee compression and
transmission processes, ensuring transparency in file handling. Format support plugins
will be introduced, expanding compatibility across various file types, including text,
images, documents, and media, enabling optimized compression specific to each format.
The final integration phase in the fourth quarter will consolidate all enhancements into a
unified framework. A newly integrated notification system will streamline real-time
alerts, consolidating interactions across multiple services. End-to-end performance
optimization will refine resource allocation and network efficiencies, ensuring
responsiveness under peak usage conditions. Enterprise deployment packages will
provide tailored solutions for scaling infrastructure and supporting larger user bases,
optimizing connectivity across extensive networks.
This phased technical integration ensures a structured rollout while prioritizing stability,
scalability, and usability. By aligning development milestones with thorough
documentation, user education, and administrative oversight, the roadmap establishes a
robust foundation for continuous improvement and sustained operational excellence. The
structured approach guarantees smooth transitions while maximizing feature impact,
refining real-time communication, media handling, compression efficiency, and overall
system responsiveness.
REFERENCES
6. Opus Audio Codec Specifications – Technical details on Opus for real-time audio streaming: https://opus-codec.org/docs/
10. Brotli Compression Algorithm – Google's Brotli format for text compression: https://brotli.org/
12. Git Flow & Version Control – Explanation of the Git Flow methodology: https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow
Sample Code Snippets
SignUp::-
using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Windows;
using Newtonsoft.Json;

namespace Connectt
{
    /// <summary>
    /// Interaction logic for SignUp.xaml
    /// </summary>
    public partial class SignUp : Window
    {
        private static readonly HttpClient client = new HttpClient();
        private readonly string otpApi = "https://connect-api-4.onrender.com/register";
        private readonly string verifyApi = "https://connect-api-4.onrender.com/verify";
        private static string name;
        private static string gml;

        public SignUp()
        {
            InitializeComponent();
        }

        // Requests an OTP; reconstructed around the original fragment
        // (control names and payload shape are assumptions).
        private async void RegisterButton_Click(object sender, RoutedEventArgs e)
        {
            name = NameBox.Text;
            gml = GmailBox.Text;
            var content = new StringContent(
                JsonConvert.SerializeObject(new { name, gmail = gml }),
                Encoding.UTF8, "application/json");
            var response = await client.PostAsync(otpApi, content);
            if (response.IsSuccessStatusCode)
            {
                ErrorTextBlock.Text = "OTP sent to your Gmail. Please enter it below.";
                ErrorTextBlock.Visibility = Visibility.Visible;
                OTPLabel.Visibility = Visibility.Visible;
                OTPBox.Visibility = Visibility.Visible;
                VerifyButton.Visibility = Visibility.Visible;
            }
            else
            {
                ErrorTextBlock.Text = "Registration failed: "
                    + await response.Content.ReadAsStringAsync();
                ErrorTextBlock.Visibility = Visibility.Visible;
            }
        }

        private async void VerifyButton_Click(object sender, RoutedEventArgs e)
        {
            string otp = OTPBox.Text;
            if (string.IsNullOrEmpty(otp))
            {
                ErrorTextBlock.Text = "Please enter OTP.";
                ErrorTextBlock.Visibility = Visibility.Visible;
                return;
            }
            try
            {
                var verifyData = new { name, otp };   // payload shape assumed
                var content = new StringContent(JsonConvert.SerializeObject(verifyData),
                    Encoding.UTF8, "application/json");
                var response = await client.PostAsync(verifyApi, content);
                if (response.IsSuccessStatusCode)
                {
                    ErrorTextBlock.Text = "Verification successful! Redirecting...";
                    ErrorTextBlock.Visibility = Visibility.Visible;
                    File.WriteAllText("log", name);   // persist the session user
                    new MainWindow().Show();
                    this.Close();
                }
                else
                {
                    var resText = await response.Content.ReadAsStringAsync();
                    ErrorTextBlock.Text = "Verification Failed: " + resText;
                    ErrorTextBlock.Visibility = Visibility.Visible;
                }
            }
            catch (Exception ex)
            {
                ErrorTextBlock.Text = "Error: " + ex.Message;
                ErrorTextBlock.Visibility = Visibility.Visible;
            }
        }
    }
}
Main Window::-
using System.IO;
using System.Windows;

namespace Connectt
{
    /// <summary>
    /// Interaction logic for MainWindow.xaml
    /// </summary>
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            if (!File.Exists("log"))
            {
                // No saved session: send the user to registration.
                new SignUp().Show();
                this.Close();
                return;
            }
            Session.name = File.ReadAllText("log").Trim();
            InitializeComponent();   // initialize the UI before loading data
            Load();
        }

        public async void Load()
        {
            Connectt.Session2 s2 = new Session2();
            await s2.LoadIncomingRequests();
            await s2.LoadFriends();
        }

        private void FriendRequestButton_Click(object sender, RoutedEventArgs e)
        {
            FriendRequestControl requestControl = new FriendRequestControl();
            MainContent.Content = requestControl;
        }

        private void ChatButton_Click(object sender, RoutedEventArgs e)
        {
            MainContent.Content = new ChatUserControl();
        }

        private void ReloadButton_Click(object sender, RoutedEventArgs e)
        {
            Load();
        }
    }
}
Session::-
using System;
using System.Collections.ObjectModel;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using System.Windows;
using Newtonsoft.Json;
using static Connectt.FriendRequestControl;

namespace Connectt
{
    public static class Session
    {
        public static string name { get; set; }
        public static ObservableCollection<FriendRequestModel>? FriendRequests { get; set; }
        public static ObservableCollection<FriendModel>? Friends { get; set; }
    }

    public class FriendModel
    {
        public string? Name { get; set; }
        public string? Id { get; set; }
    }

    public class Session2
    {
        private static readonly HttpClient client = new HttpClient();

        // Reconstructed as an async method: the original fragment used await,
        // which is not valid inside a constructor.
        public async Task LoadIncomingRequests()
        {
            string user = Session.name;
            FriendRequestControl.FriendRequests = new ObservableCollection<FriendRequestModel>();
            try
            {
                var response = await client.GetAsync(
                    $"https://connect-api-4.onrender.com/get_requests?name={user}");
                if (response.IsSuccessStatusCode)
                {
                    string json = await response.Content.ReadAsStringAsync();
                    var requests = JsonConvert.DeserializeObject<string[]>(json);
                    if (requests != null)
                    {
                        FriendRequests.Clear();
                        foreach (var name in requests)
                        {
                            FriendRequests.Add(new FriendRequestModel { Name = name, Id = name });
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show("Could not load requests: " + ex.Message);
            }
        }

        public async Task LoadFriends()
        {
            try
            {
                // Endpoint name assumed by analogy with get_requests.
                var response = await client.GetAsync(
                    $"https://connect-api-4.onrender.com/get_friends?name={Session.name}");
                if (response.IsSuccessStatusCode)
                {
                    string json = await response.Content.ReadAsStringAsync();
                    var friends = JsonConvert.DeserializeObject<string[]>(json);
                    if (friends != null)
                    {
                        Session.Friends = new ObservableCollection<FriendModel>(
                            friends.Select(f => new FriendModel { Name = f, Id = f }));
                    }
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show("Could not load friends: " + ex.Message);
            }
        }
    }
}
Friend Request Management::-
using System;
using System.Collections.ObjectModel;
using System.Linq;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using System.Windows;
using System.Windows.Controls;
using Newtonsoft.Json;

namespace Connectt
{
    public partial class FriendRequestControl : UserControl
    {
        public class FriendRequestModel
        {
            public string? Name { get; set; }
            public string? Id { get; set; }
        }

        public static ObservableCollection<FriendRequestModel> FriendRequests { get; set; }
            = new ObservableCollection<FriendRequestModel>();

        private static readonly HttpClient client = new HttpClient();
        Session2 s2;

        public FriendRequestControl()
        {
            InitializeComponent();
            RequestsListView.ItemsSource = FriendRequests;
        }

        // Sends a friend request; the input control name and the
        // send_request endpoint are assumptions.
        private async void SendRequestButton_Click(object sender, RoutedEventArgs e)
        {
            string receiverName = ReceiverNameBox.Text;
            if (string.IsNullOrEmpty(receiverName))
            {
                MessageBox.Show("Please enter a name.");
                return;
            }
            try
            {
                var payload = new StringContent(
                    JsonConvert.SerializeObject(new { sender = Session.name, receiver = receiverName }),
                    Encoding.UTF8, "application/json");
                var result = await client.PostAsync(
                    "https://connect-api-4.onrender.com/send_request", payload);
                if (result.IsSuccessStatusCode)
                {
                    MessageBox.Show("Friend request sent.");
                }
                else
                {
                    MessageBox.Show("Failed to send request.");
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show("Error: " + ex.Message);
            }
        }

        // Refreshes the incoming request list.
        public async Task LoadIncomingRequests()
        {
            string user = Session.name;
            try
            {
                var response = await client.GetAsync(
                    $"https://connect-api-4.onrender.com/get_requests?name={user}");
                if (response.IsSuccessStatusCode)
                {
                    string json = await response.Content.ReadAsStringAsync();
                    var requests = JsonConvert.DeserializeObject<string[]>(json);
                    if (requests != null && requests.Any())
                    {
                        FriendRequests.Clear();
                        foreach (var name in requests)
                        {
                            FriendRequests.Add(new FriendRequestModel { Name = name, Id = name });
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                MessageBox.Show("Could not load requests: " + ex.Message);
            }
        }

        // Tail of the accept handler from the original snippet;
        // the accept API call itself was elided in the source.
        private async void AcceptButton_Click(object sender, RoutedEventArgs e)
        {
            MessageBox.Show("Request accepted.");
            s2 = new Session2();
            await s2.LoadIncomingRequests();
        }
    }
}
Chatting and Friend List Window::-
using System;
using System.Collections.Generic;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;
using System.Windows.Controls;
using System.Windows.Threading;
using Newtonsoft.Json;

namespace Connectt
{
    public partial class ChatUserControl : UserControl
    {
        private static readonly HttpClient client = new HttpClient();
        private string? selectedFriend;
        private DispatcherTimer messageTimer;
        // Keys of messages already rendered, so polling never duplicates them.
        private readonly HashSet<string> displayedMessages = new HashSet<string>();

        public ChatUserControl()
        {
            InitializeComponent();
            FriendsList.ItemsSource = Session.Friends;
        }

        // Starts polling the selected conversation (2-second interval assumed).
        private void StartMessageTimer(string capturedFriendName)
        {
            messageTimer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(2) };
            messageTimer.Tick += async (s, e) =>
            {
                await ReloadMessagesSafely(capturedFriendName);
            };
            messageTimer.Start();
        }

        // Reconstructed around the original fragments; the get_messages endpoint,
        // the ChatMessage property names, and the UI helpers are assumptions.
        private async Task ReloadMessagesSafely(string friendName)
        {
            var response = await client.GetAsync(
                $"https://connect-api-4.onrender.com/get_messages?user={Session.name}&friend={friendName}");
            if (response.IsSuccessStatusCode)
            {
                string json = await response.Content.ReadAsStringAsync();
                var messages = JsonConvert.DeserializeObject<List<ChatMessage>>(json);
                if (messages == null) return;

                File.WriteAllText($"messages_{friendName}.txt", string.Empty); // reset local log
                foreach (var message in messages)
                {
                    string decryptedMessage = CryptoHelper.Decrypt(message.Text);
                    string prefix = message.Sender == Session.name ? "You" : friendName;
                    string messageKey = $"{message.Sender}:{message.Text}";
                    if (!displayedMessages.Add(messageKey)) continue; // already rendered

                    var textBlock = new TextBlock { Text = $"{prefix}: {decryptedMessage}" };
                    var border = new Border();
                    border.Child = textBlock;
                    MessagesPanel.Children.Add(border);
                    File.AppendAllText($"messages_{friendName}.txt",
                        $"{prefix}: {decryptedMessage}{Environment.NewLine}");
                }
            }
        }
    }
}
CryptoHelper::-
namespace Connectt
{
    public static class CryptoHelper
    {
        // Placeholder values from the report; the real key material is not included.
        private static readonly string key = "<KEY>";
        private static readonly string iv = "<IV>";