
122 High Performance Storage Jobs in Gurgaon

posted 6 days ago
experience: 5 to 10 Yrs
location
Noida
skills
  • monitoring
  • upgrade
  • cluster
  • openshift
  • logging
  • prometheus
  • grafana
Job Description
Job Title: OCP L3 Support Engineer
Location: Noida
Experience Level: 5-10 years

Role Overview:
The OCP L3 Support Engineer installs, configures, and maintains OpenShift clusters to ensure high availability, reliability, and optimal performance of containerized environments. This hands-on role focuses on advanced troubleshooting, observability setup with Prometheus/Grafana/Alertmanager, cluster upgrades, security hardening, and backup/recovery operations. Collaboration with development teams supports seamless CI/CD deployments via the Quay registry and related tools.

Key Responsibilities:
- Install, configure, and upgrade OpenShift clusters, including logging, monitoring (Prometheus, Grafana, Alertmanager), and observability configurations.
- Perform cluster security assessments, implement RBAC/policies, and manage the Quay registry for container images.
- Troubleshoot complex L3 issues across cluster installation, networking, storage, and application deployments; conduct root cause analysis.
- Handle backups, disaster recovery, and high-availability setups; optimize cluster performance and resource utilization.
- Support CI/CD pipelines with containerization tools; document procedures and train L1/L2 teams on best practices.

Required Skills:
- Expertise in OpenShift/Kubernetes: cluster installation, upgrades, logging, Prometheus/Grafana monitoring, Alertmanager, security, backup/recovery.
- Proficiency in containerization (Docker/Podman), Quay registry, CI/CD tools (Jenkins/ArgoCD), and troubleshooting.
- Strong analytical skills for performance tuning and issue resolution in production environments.

Qualifications:
- Education: B.Tech in Computer Science or related field.
- Experience: 5-10 years in OpenShift administration/support, with hands-on L3 exposure.
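To make the observability work above concrete, here is a minimal sketch that polls the Prometheus alerts API an OpenShift monitoring stack exposes; the route URL and bearer token are hypothetical placeholders, not values from this posting (on a real cluster both would come from `oc`).

```python
# Minimal sketch: list firing alerts from the Prometheus HTTP API
# (GET /api/v1/alerts). PROM_URL and TOKEN are hypothetical placeholders.
import requests

PROM_URL = "https://prometheus-k8s-openshift-monitoring.apps.example.com"
TOKEN = "sha256~REDACTED"

def firing_alerts():
    """Return (alertname, severity) for every alert currently firing."""
    resp = requests.get(
        f"{PROM_URL}/api/v1/alerts",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        (a["labels"].get("alertname", "?"), a["labels"].get("severity", "?"))
        for a in resp.json()["data"]["alerts"]
        if a.get("state") == "firing"
    ]

if __name__ == "__main__":
    for name, severity in firing_alerts():
        print(f"{severity:>8}  {name}")
```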

posted 7 days ago

PostgreSQL DBA

LTIMindtree Limited
experience: 8 to 12 Yrs
location
Noida, Bangalore, Chennai, Hyderabad, Kolkata, Pune, Mumbai City

skills
  • PostgreSQL
  • Azure Database for PostgreSQL (flexible server)
Job Description
"13 years of IT experience and minimum of 5 years of experience on PostgreSQL database administration Should have worked with the team size of above 10 members technically guided the team Willing to provide 24x7 support flexible to work in the shiftsPrimary skillsKEY RESPONSIBILITIESPostgreSQLPlanning for Production Storage and Capacity ManagementCreate logical models and build physical modelsPlanning for Production Storage and Capacity ManagementCreation managing migrating PostgreSQL schema tables tablespaces and users roles group in 10x 11x12x13x 14x 15x to Azure Cloud including PaaS Azure database for PostgreSQL single flexible serversPerformance tuning Query tuning by generating and explaining plan for SQL queriesExpert in DBA functions like managing databases backup restore scheduling jobs space performance management principles of database replication scalability and high availabilityExpert in Backup Recovery including Pgdump pgdumpall setting up WAL archiving Point in time recoveryAnalyze recommend and implement configuration for Optimized PerformanceAbility to plan hardware capacity requirements for storage memory and CPU CyclesAbility to configure and troubleshoot log shipping cascading standbys partial replication continuous recovery detach and reattach slavesUnderstanding of UNIX environment with shell scriptingTrouble shooting and resolving database issuesThis role may entail shift afterhours support on an oncall basisHADR Setup including logical physical standbyConforming to client compliances and expectationsThis role may entail shift afterhours support on an oncall basisGood working experience all HA and DR solutionsMandatory Technical skills requiredMigrating onpremise PostgreSQL to Azure CloudMigrating Azure database for PostgreSQL Single server to Migrating Azure database for PostgreSQL Flexible serverSecurity setup Enabling Configuring SSL Certificate Authentication to encrypt data transfer"
posted 2 days ago
experience: 12 to 16 Yrs
location
Noida, Uttar Pradesh
skills
  • Linux
  • Ansible
  • Python
  • VMware
  • AWS
  • KVM-based virtualization
  • Apache CloudStack
  • Infrastructure-as-Code
  • Networking concepts
  • Storage technologies
  • Virtualization architectures
Job Description
As a Virtualization Engineer, you will be responsible for designing, deploying, and managing KVM-based virtualization environments to meet organizational requirements. You will leverage Apache CloudStack for orchestration, configuration management, and provisioning of virtual resources. Your key responsibilities will include:
- Developing and maintaining automation scripts using Ansible and Python to automate tasks like provisioning, monitoring, and configuration management.
- Implementing Infrastructure-as-Code principles to ensure consistency and repeatability in deployments.
- Monitoring virtualization infrastructure performance, identifying bottlenecks, and optimizing resource allocation.
- Proactively ensuring high availability, scalability, and disaster recovery readiness in virtualized environments.
- Implementing security best practices and compliance measures, and conducting regular security audits and vulnerability assessments.
- Providing technical support and troubleshooting expertise for complex virtualization issues, collaborating with cross-functional teams to resolve incidents.

To qualify for this position, you should:
- Hold a Bachelor's degree in Computer Science, Engineering, or a related field, or possess equivalent practical experience.
- Have 12+ years of experience in designing and managing KVM-based virtualization environments.
- Demonstrate hands-on expertise with Apache CloudStack for orchestration and management of virtualized infrastructure.
- Possess strong scripting skills in Ansible and Python for infrastructure automation.
- Be familiar with VMware virtualization technologies and have experience deploying and managing virtualized instances on AWS cloud.
- Have a solid understanding of networking concepts, storage technologies, and virtualization architectures.
- Be able to troubleshoot complex technical issues independently and provide effective solutions.
- Demonstrate excellent communication skills and the ability to collaborate effectively within a team environment.
- Possess relevant certifications such as RHCE, AWS Certified Solutions Architect, or similar credentials.
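For a taste of the automation this role describes, here is a minimal sketch using the libvirt Python bindings (the same API that Ansible virtualization modules wrap) to inventory KVM guests on a host; it assumes local access to qemu:///system and is illustrative only.

```python
# Minimal sketch: list every KVM domain on the local hypervisor with its
# run state and configured memory, via the libvirt Python bindings.
import libvirt

conn = libvirt.open("qemu:///system")  # assumes local libvirtd access
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "shut off"
        # maxMemory() reports KiB; convert to MiB for readability.
        print(f"{dom.name():30s} {state:10s} maxmem={dom.maxMemory() // 1024} MiB")
finally:
    conn.close()
```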

posted 2 days ago
experience: 3 to 7 Yrs
location
Noida, Uttar Pradesh
skills
  • Kubernetes
  • PostgreSQL
  • Elasticsearch
  • Distributed systems
  • Data modeling
  • Python
  • Data processing
  • Kotlin
  • API-driven backend services
  • Object storage systems
  • Software scalability principles
  • SDK usage
  • Terraform
  • Prometheus
  • Backend performance testing
Job Description
As a Senior Backend Engineer with expertise in Kotlin, you will be joining a newly established scrum team to enhance a core data contextualization platform. This platform is crucial for associating and matching data from various sources into a unified data model. Your role will involve leading backend development efforts to modernize and scale the platform, ensuring compatibility with the updated data architecture and orchestration framework. This is a high-impact role contributing to a long-term roadmap focused on scalable, maintainable, and secure industrial software.

**Key Responsibilities:**
- Design, develop, and maintain scalable, API-driven backend services using Kotlin.
- Align backend systems with modern data modeling and orchestration standards.
- Collaborate with engineering, product, and design teams for seamless integration across the broader data platform.
- Implement and refine RESTful APIs following established design guidelines.
- Participate in architecture planning, technical discovery, and integration design for improved platform compatibility and maintainability.
- Conduct load testing, improve unit test coverage, and contribute to reliability engineering efforts.
- Drive software development best practices including code reviews, documentation, and CI/CD process adherence.
- Ensure compliance with multi-cloud design standards and use of infrastructure-as-code tooling (Kubernetes, Terraform).

**Qualifications:**
- 3+ years of backend development experience, with a strong focus on Kotlin.
- Proven ability to design and maintain robust, API-centric microservices.
- Hands-on experience with Kubernetes-based deployments, cloud-agnostic infrastructure, and modern CI/CD workflows.
- Solid knowledge of PostgreSQL, Elasticsearch, and object storage systems.
- Strong understanding of distributed systems, data modeling, and software scalability principles.
- Excellent communication skills and ability to work in a cross-functional, English-speaking environment.
- Bachelor's or Master's degree in Computer Science or related discipline.

**Bonus Qualifications:**
- Experience with Python for auxiliary services, data processing, or SDK usage.
- Knowledge of data contextualization or entity resolution techniques.
- Familiarity with 3D data models, industrial data structures, or hierarchical asset relationships.
- Exposure to LLM-based matching or AI-enhanced data processing (not required, but a plus).
- Experience with Terraform, Prometheus, and scalable backend performance testing.

The role involves developing Data Fusion, a robust SaaS for industrial data, and solving concrete industrial data problems by designing and implementing APIs and services. You will collaborate with application teams to ensure a delightful user experience, and work with distributed open-source software, databases, and storage systems. The company, GlobalLogic, offers a culture of caring, learning and development opportunities, interesting work, and balance and flexibility. Join the team to be part of transforming businesses and redefining industries through intelligent products and services.
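The posting's stack is Kotlin, but the core contextualization idea, matching records from different sources onto one entity by a normalized key, is language-agnostic; here is a minimal illustrative sketch in Python, with all field names and sample data hypothetical.

```python
# Minimal sketch of rule-based entity matching: canonicalize an equipment
# tag, then join sensor records to asset records on the normalized key.
# All field names and sample data are hypothetical.
def normalize(tag: str) -> str:
    """Canonicalize a tag: uppercase and strip separators."""
    return tag.upper().replace("-", "").replace("_", "").strip()

sensors = [{"tag": "pt-101", "unit": "bar"}, {"tag": "FT_202", "unit": "kg/h"}]
assets = [{"tag": "PT101", "site": "plant-a"}, {"tag": "FT202", "site": "plant-a"}]

asset_index = {normalize(a["tag"]): a for a in assets}
for s in sensors:
    match = asset_index.get(normalize(s["tag"]))
    if match:
        print(f"{s['tag']} -> {match['tag']} @ {match['site']}")
```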
posted 2 weeks ago
experience: 8 to 12 Yrs
location
Noida, Uttar Pradesh
skills
  • GCP
  • Cloud Storage
  • Python
  • SQL
  • Data Modeling
  • SPARQL
  • Apigee
  • BigQuery
  • Dataflow
  • Cloud Run
  • Pub/Sub
  • CI/CD
  • Schema.org
  • RDF/JSON-LD
  • Cloud Data Fusion
Job Description
As a Technical Architect at CLOUDSUFI, you will be responsible for leading the design and implementation of scalable, high-performance data architectures. Your main focus will be on building a unified and intelligent data ecosystem that enables seamless integration of diverse public datasets into our Knowledge Graph using Google Cloud Platform (GCP). To excel in this role, you will need a deep understanding of modern data architectures, cloud-native services, and best practices in data integration, knowledge graph modeling, and automation frameworks.

**Key Responsibilities:**
- Define and own the end-to-end architecture for data ingestion, transformation, and knowledge graph integration pipelines.
- Establish best practices for schema modeling, metadata management, and data lineage across GCP environments.
- Provide technical leadership and mentorship to Data Engineers and Automation teams.
- Architect large-scale, automated ETL/ELT frameworks leveraging GCP services such as Dataflow, BigQuery, Pub/Sub, Cloud Run, and Cloud Storage.
- Design schema mapping and entity resolution frameworks using LLM-based tools for auto-schematization and intelligent data classification.
- Define standards for integrating datasets into the Knowledge Graph, including schema.org compliance, MCF/TMCF file generation, and SPARQL endpoints.
- Establish data quality, validation, and observability frameworks (statistical validation, anomaly detection, freshness tracking).
- Implement governance controls to ensure scalability, performance, and security in data integration workflows.
- Partner with the Automation POD to integrate AI/LLM-based accelerators for data profiling, mapping, and validation.
- Drive innovation through reusable data frameworks and automation-first architectural principles.
- Collaborate closely with cross-functional teams (Engineering, Automation, Managed Services, and Product) to ensure architectural alignment and delivery excellence.
- Influence and guide architectural decisions across multiple concurrent projects.

**Qualifications and Experience:**
- **Education:** Bachelor's or Master's in Computer Science, Data Engineering, or a related field.
- **Experience:** 8+ years of experience in Data Engineering or Architecture roles, including 3+ years in GCP-based data solutions.
- **Technical Expertise:** Must have: GCP (BigQuery, Dataflow/Apache Beam, Cloud Run, Cloud Storage, Pub/Sub), Python, SQL, data modeling, CI/CD (Cloud Build). Good to have: SPARQL, Schema.org, RDF/JSON-LD, Apigee, Cloud Data Fusion, and knowledge graph concepts. Strong knowledge of data governance, metadata management, and data lineage frameworks.

**Preferred Qualifications:**
- Experience working with LLM-based or AI-assisted data processing tools (e.g., auto-schematization, semantic mapping).
- Familiarity with open data ecosystems and large-scale data integration initiatives.
- Exposure to multilingual or multi-domain data integration.
- Strong analytical and problem-solving ability.
- Excellent communication and stakeholder management skills.
- Passion for mentoring teams and driving best practices.

This position requires experience working with US/Europe based clients in an onsite/offshore delivery model, strong verbal and written communication skills, technical articulation, listening and presentation skills, effective task prioritization, time management, internal/external stakeholder management skills, quick learning ability, a team player mindset, experience of working under stringent deadlines in a matrix organization structure, and demonstrated Organizational Citizenship Behavior (OCB) in past organizations.
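As a sketch of the kind of pipeline named above (Dataflow/Apache Beam moving events from Pub/Sub into BigQuery), here is a minimal streaming job; the project, subscription, and table names are hypothetical placeholders, and the target table is assumed to already exist.

```python
# Minimal sketch: stream JSON messages from Pub/Sub into BigQuery with
# Apache Beam. Project, subscription, and table ids are hypothetical.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

opts = PipelineOptions(streaming=True, project="my-project", region="us-central1")

with beam.Pipeline(options=opts) as p:
    (
        p
        | "Read" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/raw-events")
        | "Parse" >> beam.Map(json.loads)
        | "Write" >> beam.io.WriteToBigQuery(
            "my-project:staging.events",  # table assumed to exist
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```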
posted 3 weeks ago

Solar Site Engineer

Enerture Technologies Pvt Ltd
experience: 2 to 6 Yrs
location
Delhi
skills
  • troubleshooting
  • project management
  • record keeping
  • customer education
  • electrical systems
  • mechanical systems
  • communication skills
  • customer service
  • solar photovoltaic systems
  • site assessments
  • performance tests
  • safety standards
  • hand and power tools
  • technical drawings interpretation
  • problem-solving
  • energy storage systems
  • grid-tied inverters
  • local building codes
Job Description
Role Overview:
As a Solar Site Engineer, your primary responsibility will be to install, inspect, maintain, and repair solar photovoltaic (PV) systems. You will conduct site assessments, troubleshoot issues, and ensure the optimal operation and efficiency of solar systems. Collaboration with project managers, engineers, and team members is essential for timely project completion. Additionally, educating customers on system operation and maintenance best practices will be part of your role.

Key Responsibilities:
- Install, inspect, maintain, and repair solar PV systems, including modules, inverters, and other components.
- Perform site assessments and prepare feasibility reports for solar installations.
- Troubleshoot and diagnose issues with solar energy systems and provide effective solutions.
- Conduct performance tests and inspections to ensure optimal system operation.
- Adhere to safety standards to maintain a secure working environment.
- Work closely with project managers, engineers, and team members for project completion.
- Maintain accurate records of installations, maintenance activities, and system performance.
- Educate customers on system operation and maintenance best practices.

Qualifications Required:
- High school diploma or equivalent; technical training or certification in solar energy technology preferred.
- Minimum of [X] years of experience in installing, maintaining, and troubleshooting solar PV systems.
- Strong understanding of electrical and mechanical systems related to solar energy.
- Proficiency in using hand and power tools and testing equipment.
- Ability to read and interpret technical drawings and manuals.
- Excellent problem-solving skills and attention to detail.
- Ability to work at heights and in various weather conditions.
- Strong communication and customer service skills.
- Valid driver's license and reliable transportation.
- Certification in solar energy systems (e.g., NABCEP) is a plus.

Additional Company Details:
The company offers competitive salary and performance-based incentives, comprehensive health, dental, and vision insurance, a retirement savings plan with company match, opportunities for professional development and continuing education, a flexible work environment and work-life balance initiatives, and paid time off and holidays.
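The "performance tests" above usually come down to simple arithmetic such as the performance ratio (measured yield over reference yield, in the spirit of IEC 61724); a quick sketch with illustrative numbers:

```python
# Performance ratio: measured yield (kWh per kWp) divided by reference
# yield (equivalent full-sun hours at 1 kW/m2). Figures are illustrative.
def performance_ratio(energy_kwh: float, p_rated_kwp: float,
                      insolation_kwh_m2: float) -> float:
    measured_yield = energy_kwh / p_rated_kwp      # kWh produced per kWp
    reference_yield = insolation_kwh_m2 / 1.0      # hours of full sun (STC)
    return measured_yield / reference_yield

# e.g. a 100 kWp array producing 420 kWh on a 5.25 kWh/m2 day:
print(f"PR = {performance_ratio(420, 100, 5.25):.2f}")  # -> PR = 0.80
```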
posted 2 weeks ago
experience: 2 to 6 Yrs
location
Noida, Uttar Pradesh
skills
  • PostgreSQL
  • RESTful APIs
  • Microservices
  • Database design
  • OOPS
  • Linux
  • System design
  • Agile
  • Scrum
  • Cloud Computing
  • Threading
  • Performance tuning
  • Security
  • Golang
  • Data structures & algorithms
  • NodeJS
  • MySQL
  • Web server setup/management
  • Service architecture
  • Unit tests
Job Description
As a Backend Developer specializing in Golang, your main responsibility will be to enhance the performance of our web-based application. You will focus on developing server-side logic, maintaining the central database, and ensuring high responsiveness to server-side requests. Collaboration with front-end developers, designing back-end components, and integrating data storage and protection will be key aspects of your role.

Your tasks will include:
- Working with the team to design and build backend applications and services
- Taking ownership of application features from design to post-launch support
- Deploying and maintaining applications on cloud-hosted platforms
- Building scalable, secure, and reliable applications
- Writing high-quality, clean, maintainable code and conducting peer code reviews
- Developing backend server code, APIs, and database functionality
- Proposing coding standards, tools, frameworks, and automation processes
- Leading technical architecture and design for application development
- Working on Proof of Concepts, exploring new ideas, and influencing the product roadmap

Qualifications required for this role are:
- Minimum of 2+ years of experience in Golang, PostgreSQL, and backend development
- Previous experience in NodeJS and MySQL is preferred
- Exceptional communication, organization, and leadership skills
- Proficiency in debugging and optimization
- Experience in designing and developing RESTful APIs
- Expertise in building scalable microservices, database design, and service architecture
- Strong foundation in computer science with proficiency in OOPS, data structures, algorithms, and software design
- Solid Linux skills with troubleshooting, monitoring, and log file setup/analysis experience
- Knowledge of system design and setting up unit tests
- Familiarity with Cloud Computing, threading, performance tuning, and security

Preferred qualifications include:
- Demonstrated high ownership and a positive attitude towards work
- Interest in learning new tools and technologies
- Proficiency in designing and coding web applications and services, ensuring high quality and performance, fixing bugs, maintaining code, and deploying apps to various environments
- Bachelor's degree in Computer Science or Software Engineering preferred

Please note that this job description was referenced from hirist.tech.
posted 2 months ago
experience: 5 to 9 Yrs
location
Noida, Uttar Pradesh
skills
  • Power BI
  • Data modeling
  • DAX
  • SharePoint
  • Data visualization
  • Python
  • R
  • Data analysis
  • Agile development
  • Power Query
  • ETL processes
  • SQL databases
  • Azure storage
  • Microsoft Fabric
  • Row-level security
  • Power BI report optimization
  • Azure SQL
  • Power BI Desktop
  • Power BI Service
  • Charting technologies
Job Description
In this role as a Power BI professional with Fabric expertise at GlobalLogic, you will be responsible for designing, developing, and maintaining Power BI reports and dashboards to derive actionable insights from assessment data. Your key responsibilities will include:
- Working with data from the company's established ETL processes and other sources
- Creating and optimizing data models within Power BI for accuracy and performance
- Applying innovative visualization techniques to effectively communicate assessment analytics
- Deploying and maintaining Power BI reports in a Microsoft Fabric environment
- Collaborating with product management to prioritize deliveries and design improvements
- Reviewing new analytics and reporting technologies with the product team
- Understanding data context and reporting requirements with subject matter experts
- Maintaining technical documentation and source control related to project design and implementation

Qualifications required for this role include:
- Bachelor's degree in computer science, analytics, or related field, or equivalent work experience
- Minimum of 5 years of experience working with Power BI and Power Query, loading data from Azure and SharePoint
- Strong DAX skills for writing complex calculations and statistical operations
- Proficiency in data modeling, including star and snowflake schema
- Experience implementing row-level security and dynamic row-level security in Power BI reports
- Familiarity with Power BI report performance optimization techniques
- Practical experience with Azure SQL or similar databases for data exploration
- Excellent communication skills and numerical abilities
- Understanding of data protection principles
- Ability to self-organize, prioritize tasks, and work collaboratively at all levels

Preferred qualifications include experience in Python or R for analysis, familiarity with advanced charting technologies, knowledge of assessment data and psychometric principles, experience with Azure, and familiarity with Agile development practices.

At GlobalLogic, you can expect a culture of caring, commitment to learning and development, interesting and meaningful work, balance and flexibility, and a high-trust organization. As a trusted digital engineering partner, GlobalLogic collaborates with clients to transform businesses and redefine industries through intelligent products, platforms, and services.
posted 2 weeks ago
experience: 1 to 5 Yrs
location
Delhi, All India
skills
  • Cloud Computing
  • Cloud Storage
  • IAM
  • Infrastructure setup
  • Software Development
  • Scripting languages
  • Programming languages
  • Databases
  • Google Cloud Platform (GCP)
  • Professional Cloud Architect
  • Professional Cloud Developer
  • Professional DevOps Engineer
  • Compute Engine
  • VPC
  • Cloud Functions
  • BigQuery
  • Pub/Sub
  • Linux operating systems
  • System automation tools
  • CI/CD pipelines
  • DevOps practices
Job Description
As a Cloud Engineer specializing in Google Cloud Platform (GCP) at DrCode, your primary responsibility will be to design, implement, and manage cloud-based infrastructure and solutions on GCP. You will also collaborate with cross-functional teams to optimize performance and scalability. Your day-to-day activities will include configuring cloud services, maintaining system security, troubleshooting issues, and ensuring high availability.

Key Responsibilities:
- Design, implement, and manage cloud-based infrastructure and solutions on Google Cloud Platform (GCP).
- Collaborate with cross-functional teams to optimize performance and scalability.
- Configure cloud services, maintain system security, troubleshoot issues, and ensure high availability.

Qualifications:
- Proficiency in Cloud Computing, with hands-on experience in Google Cloud Platform (GCP), and certification in GCP.
- Professional Cloud Architect/Developer/DevOps Engineer certification from Google Cloud Platform (GCP) is mandatory.
- 1+ years of hands-on experience with GCP services (Compute Engine, Cloud Storage, VPC, IAM, Cloud Functions, BigQuery, Pub/Sub, etc.).
- Strong skills in infrastructure setup, Linux operating systems, and system automation tools.
- Experience in software development, including scripting and programming languages.
- Knowledge of databases, including design, optimization, and management.
- Excellent problem-solving abilities, effective communication, and a collaborative approach.
- Familiarity with CI/CD pipelines and DevOps practices is an advantage.
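As a small illustration of routine GCP work like the above, here is a minimal sketch using the google-cloud-storage client library to inventory buckets; it assumes Application Default Credentials, and the project id is a placeholder.

```python
# Minimal sketch: list Cloud Storage buckets with location and storage
# class. The project id is a hypothetical placeholder; auth comes from
# Application Default Credentials (e.g. `gcloud auth application-default login`).
from google.cloud import storage

client = storage.Client(project="my-gcp-project")
for bucket in client.list_buckets():
    print(f"{bucket.name}: location={bucket.location} class={bucket.storage_class}")
```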
posted 2 months ago
experience: 10 to 14 Yrs
location
Noida, Uttar Pradesh
skills
  • Incident Management
  • High Availability
  • Troubleshooting
  • Cloud Computing
  • Leadership
  • Communication Skills
  • ITIL Methodology
  • Network Configurations
  • Storage Technologies
  • Database Concepts
Job Description
As an Incident Commander in the Global Operations Center, GBU Cloud Services at Oracle, your role is crucial in providing an elite service experience to GBU partners. You will lead the incident management lifecycle, facilitate high-severity incident triage, engage cross-functional teams, drive incident resolution, and extract deep learnings from incidents to enhance service performance and organizational knowledge growth. Your proactive approach and ability to collaborate with teams will ensure timely service restoration and maximum customer satisfaction.

**Key Responsibilities:**
- Accountable for the end-to-end incident management lifecycle
- Act as Incident Commander for high-severity incident triage
- Engage cross-functional teams for timely incident resolution
- Participate in on-call rotation for the Incident Commander role after hours and on weekends
- Analyze incident data to identify service performance insights
- Conduct regular incident review meetings with Cloud Operations teams
- Provide oversight for Operation Engineer performance
- Run post-incident debrief meetings to extract deep learnings
- Lead Global Service Center service onboarding and documentation
- Facilitate onboarding of new services to the Cloud SOC
- Review and enhance runbook documentation with cross-functional teams

**Qualifications:**
- 10 years of experience in a similar high-availability operations environment
- Bachelor's degree in computer science or a related technology field
- Strong verbal and written communication skills
- Ability to participate in on-call rotation for the Incident Commander role
- Process-oriented, energetic, and analytical thinker
- Extensive troubleshooting experience in operating systems and applications
- Solid understanding of ITIL methodology, the incident management process, and infrastructure services
- Knowledge of cloud computing, server clusters, network configurations, storage technologies, and databases
- Ability to quickly assimilate knowledge of new systems
- Prior experience in leading incident mitigation in a cloud environment

In addition, Oracle is a world leader in cloud solutions, committed to using innovative technology to address current challenges across various sectors. With a focus on inclusivity and empowerment, Oracle provides global opportunities for career growth, work-life balance, competitive benefits, and support for community engagement. If you require accessibility assistance or accommodation for a disability during the employment process, please contact Oracle at accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.
posted 2 weeks ago

Database Architect

Tarento Group
experience: 8 to 12 Yrs
location
Delhi, All India
skills
  • PostgreSQL
  • SQL
  • Query Optimization
  • Data Modeling
  • Performance Optimization
  • PL/pgSQL
  • CDC (Change Data Capture)
  • Replication Topologies
  • Sharding
  • Partitioning
  • Connection Pooling
  • Database Dashboards
  • Cloud Database Architectures
  • AWS RDS
  • Azure Database for PostgreSQL
  • GCP Cloud SQL
Job Description
As a Database Architect, your role will involve designing, optimizing, and scaling mission-critical data systems using PostgreSQL. You will collaborate closely with development and operations teams to define data models, tune SQL queries, and architect high-performance, scalable database solutions. Your expertise in database design, optimization, and performance engineering will be crucial in recommending tuning strategies and resolving database bottlenecks across environments. Additionally, familiarity with NoSQL databases is considered a plus.

Key Responsibilities:

- **Database Architecture & Design:**
  - Architect scalable, secure, and high-performance PostgreSQL database solutions for transactional and analytical systems.
  - Design and maintain logical and physical schemas, ensuring proper normalization, entity relationships, and data integrity without compromising performance.
  - Define database standards for naming conventions, indexing, constraints, partitioning, and query optimization.
  - Architect and maintain database-specific materialized views, stored procedures, functions, and triggers.
  - Oversee schema migrations, rollbacks, and version-controlled database evolution strategies.
  - Implement distributed transaction management and ensure ACID compliance across services.

- **Performance Optimization & Query Engineering:**
  - Analyze and tune complex SQL queries, joins, and stored procedures for high-throughput applications.
  - Define and monitor indexing strategies, query execution plans, and partitioning for large-scale datasets.
  - Lead query optimization, deadlock detection, and resolution strategies with development teams.
  - Collaborate with developers to optimize ORM configurations and reduce query overhead.
  - Design database caching and sharding strategies that align with application access patterns.
  - Review and recommend improvements for connection pooling configurations on the application side to ensure alignment with database capacity planning.

- **Data Modelling, Integrity & CDC Integration:**
  - Design conceptual, logical, and physical data models aligned with system requirements.
  - Ensure data consistency and integrity using constraints, foreign keys, and triggers.
  - Define and implement CDC (Change Data Capture) strategies using tools like Debezium for downstream synchronization and event-driven architectures.
  - Collaborate with data engineering teams to define ETL and CDC-based data flows between microservices and analytics pipelines for data warehousing use cases.

- **Monitoring, Dashboards & Alerting:**
  - Define and oversee database monitoring dashboards using tools like Prometheus, Grafana, or pgBadger.
  - Set up alerting rules for query latency, replication lag, deadlocks, and transaction bottlenecks.
  - Collaborate with operations teams to continuously improve observability, performance SLAs, and response metrics.
  - Perform Root Cause Analysis (RCA) for P1 production issues caused by database performance or query inefficiencies.
  - Conduct PostgreSQL log analysis to identify slow queries, locking patterns, and resource contention.
  - Recommend design-level and query-level corrective actions based on production RCA findings.

- **Automation, Jobs & Crons:**
  - Architect database-level jobs, crons, and scheduled tasks for housekeeping, data validation, and performance checks.
  - Define best practices for automating materialized view refreshes, statistics updates, and data retention workflows.
  - Collaborate with DevOps teams to ensure cron scheduling aligns with system load and performance windows.
  - Introduce lightweight automation frameworks for periodic query performance audits and index efficiency checks.

- **Security, Transactions & Compliance:**
  - Define transaction isolation levels, locking strategies, and distributed transaction coordination for high-concurrency environments.
  - Collaborate with security and compliance teams to implement data encryption, access control, and auditing mechanisms.
  - Ensure database design and data storage align with compliance frameworks like DPDP, ISO 27001, or GDPR.
  - Validate schema and transaction logic to prevent data anomalies or concurrency violations.

- **Collaboration & Technical Leadership:**
  - Work closely with backend developers to architect high-performance queries, schema changes, and stored procedures.
  - Collaborate with DevOps and SRE teams to define HA/DR strategies, replication topologies, and capacity scaling (advisory role).
  - Mentor developers and junior database engineers in query optimization, data modeling, and performance diagnostics.
  - Participate in architecture reviews, technical design sessions, and sprint planning to guide database evolution across services.

- **Documentation & Knowledge Sharing:**
  - Maintain comprehensive documentation for schemas, views, triggers, crons, and CDC pipelines.
  - Record rationale for schema design choices and indexing decisions.
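One building block of the RCA and query-audit workflow above is programmatic plan inspection; here is a minimal sketch, assuming psycopg2 and a reachable PostgreSQL instance, that runs EXPLAIN ANALYZE in a rolled-back transaction and flags sequential scans. The DSN and query are placeholders.

```python
# Minimal sketch: run EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON) and walk the
# plan tree to flag sequential scans. DSN and SQL are hypothetical.
import psycopg2

DSN = "host=db.example.com dbname=app user=dba"
SQL = "SELECT * FROM orders WHERE customer_id = 42"

conn = psycopg2.connect(DSN)
try:
    with conn.cursor() as cur:
        cur.execute(f"EXPLAIN (ANALYZE, BUFFERS, FORMAT JSON) {SQL}")
        plan = cur.fetchone()[0][0]  # FORMAT JSON returns a one-element list
        print("execution time (ms):", plan["Execution Time"])

        def walk(node):
            if node["Node Type"] == "Seq Scan":
                print("seq scan on", node.get("Relation Name"),
                      "actual rows:", node.get("Actual Rows"))
            for child in node.get("Plans", []):
                walk(child)

        walk(plan["Plan"])
finally:
    conn.rollback()  # ensure an analyzed DML statement leaves no side effects
    conn.close()
```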
posted 2 months ago
experience: 5 to 9 Yrs
location
Noida, Uttar Pradesh
skills
  • Power BI
  • Data modeling
  • DAX
  • Data validation
  • Python
  • R
  • Data visualization
  • SharePoint
  • Power Query
  • ETL processes
  • Row-level security
  • Azure SQL
  • Data exploration
  • Power BI Desktop
  • Power BI Service
  • SQL databases
  • Azure storage
  • Statistical operations
  • Data protection principles
  • Agile development practices
Job Description
Role Overview:
You will be responsible for designing, developing, and maintaining Power BI reports and dashboards to derive actionable insights from assessment data. You will collaborate with product management and subject matter experts to ensure accurate and performant reports. Your role will involve deploying and maintaining Power BI reports in a Microsoft Fabric environment and utilizing innovative visualization techniques for clear communication of assessment analytics.

Key Responsibilities:
- Design, develop, and maintain Power BI reports and dashboards
- Work with data from established ETL processes and other sources
- Create and optimize data models within Power BI
- Apply innovative visualization techniques for clear communication
- Deploy and maintain Power BI reports in a Microsoft Fabric environment
- Collaborate with product management and subject matter experts
- Maintain technical documentation and source control

Qualifications:
- Bachelor's degree in computer science, analytics, or related field
- Minimum 5 years of experience with Power BI and Power Query
- Experience with data consumption from various sources, including SQL databases, Azure storage, and SharePoint
- Strong DAX skills with the ability to write complex calculations
- Understanding of data modeling, including star and snowflake schema
- Experience implementing row-level security in Power BI reports
- Familiarity with Power BI report performance optimization techniques
- Practical experience with Azure SQL for data exploration
- Excellent communication skills and numerical skills
- Understanding of data protection principles
- Ability to self-organize, prioritize, and track tasks

Company Details:
GlobalLogic is a trusted digital engineering partner that collaborates with clients to transform businesses and industries through intelligent products, platforms, and services. The company prioritizes a culture of caring, continuous learning and development, interesting and meaningful work, balance, flexibility, and integrity. By joining GlobalLogic, you will be part of a high-trust organization that values integrity, truthfulness, and candor in everything it does.
posted 3 weeks ago

Purchase Officer

ETERNA GLOBAL SOLUTIONS LLP
experience: 2 to 6 Yrs
location
Noida, Uttar Pradesh
skills
  • Compliance
  • Inventory Management
  • Demand Forecasting
  • Order Fulfillment
  • Negotiations
  • Bulk buying
  • Supplier Performance Evaluation
Job Description
Role Overview:
You will be responsible for monitoring and improving Key Performance Indicators (KPIs) related to purchasing and inventory management. Your role will involve analyzing data and implementing strategies to optimize cost savings, purchase order cycle time, supplier performance, spend under management, compliance rate, inventory turnover ratio, stockout rate, carrying cost of inventory, order accuracy, and lead time.

Key Responsibilities:
- Implement purchasing strategies to achieve cost savings through negotiations and bulk buying.
- Monitor and reduce purchase order cycle time to enhance the efficiency of the purchasing process.
- Evaluate supplier performance based on criteria such as delivery time, quality of goods, and contract compliance.
- Manage the percentage of total spend actively controlled by the purchasing team to optimize organizational expenses.
- Ensure compliance with purchasing policies and procedures to govern the procurement process effectively.
- Analyze and improve the inventory turnover ratio to enhance inventory management efficiency.
- Monitor the stockout rate to improve inventory planning and demand forecasting accuracy.
- Evaluate the carrying cost of inventory to optimize overall profitability by managing storage, insurance, and depreciation costs.
- Maintain order accuracy by ensuring a high percentage of orders are fulfilled without errors, enhancing customer satisfaction and operational efficiency.
- Manage lead time from placing an order to receiving stock to make informed decisions and optimize inventory levels.

Qualifications Required:
- Previous experience in purchasing and inventory management roles.
- Strong analytical skills and proficiency in data analysis.
- Excellent negotiation and communication skills.
- Knowledge of inventory planning and supply chain management.
- Familiarity with purchasing policies and procedures.
- Ability to work effectively in a fast-paced environment and meet deadlines.

Note: The company provides full-time, permanent job opportunities with a day shift schedule at an in-person work location.
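The KPIs above are simple ratios; a quick sketch with made-up figures shows the arithmetic behind two of them:

```python
# Inventory turnover and stockout rate with illustrative numbers.
def inventory_turnover(cogs: float, avg_inventory: float) -> float:
    """Times inventory is sold and replaced over the period."""
    return cogs / avg_inventory

def stockout_rate(stockout_orders: int, total_orders: int) -> float:
    """Share of orders that could not be fulfilled from stock."""
    return stockout_orders / total_orders

print(f"turnover  = {inventory_turnover(1_200_000, 200_000):.1f}x")  # 6.0x
print(f"stockouts = {stockout_rate(18, 900):.1%}")                   # 2.0%
```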
posted 2 weeks ago

Data Science Specialist

Polestar Analytics
experience: 3 to 7 Yrs
location
Noida, Uttar Pradesh
skills
  • Data Science
  • Machine Learning
  • Statistics
  • Clustering
  • Regression
  • NLP
  • Python
  • SQL
  • AWS
  • Azure
  • GCP
  • Data Processing
  • Hadoop
  • Spark
  • Data Visualization
  • Power BI
  • Tableau
  • Matplotlib
  • Business Understanding
  • Communication Skills
  • Unsupervised Learning
  • Kubernetes
  • Docker
  • Time Series Forecasting
  • Classification Techniques
  • Boosting Algorithms
  • Optimization Techniques
  • Recommendation Systems
  • ElasticNet
  • PySpark
  • Machine Learning Models
  • MLOps Pipelines
  • Data Storage Tools
  • D3
  • Supervised Learning
  • Ensemble Learning
  • Random Forest
  • Gradient Boosting
  • XGBoost
  • LightGBM
  • Large Language Models
  • LLMs
  • Open-source models
  • Llama
  • GPT
  • BERT
  • Prompt Engineering
  • RAG
  • Fine-tuning techniques
  • Cloud Platforms
  • MLflow
Job Description
Role Overview:
You will be responsible for leveraging your 3+ years of experience in Data Science, machine learning, or related fields to tackle structured and unstructured data problems. Your expertise in machine learning and statistics will be crucial as you develop, implement, and deploy machine learning models on cloud platforms such as Azure, AWS, and GCP. Your role will involve integrating machine learning models into existing systems, developing MLOps pipelines, monitoring model performance, and providing insights to stakeholders.

Key Responsibilities:
- Apply practical experience in time series forecasting, clustering, classification techniques, regression, boosting algorithms, optimization techniques, NLP, and recommendation systems
- Utilize programming skills in Python/PySpark and SQL to develop and implement machine learning models
- Deploy machine learning models on cloud platforms using AWS/Azure Machine Learning, Databricks, or other relevant services
- Integrate machine learning models into systems and applications to ensure seamless functionality and data flow
- Develop and maintain MLOps pipelines for automated model training, testing, deployment, and monitoring
- Monitor and analyze model performance, providing reports and insights to stakeholders
- Demonstrate excellent written and verbal communication skills in presenting ideas and findings to stakeholders

Qualifications Required:
- Engineering graduate from a reputed institute and/or Master's in Statistics, MBA
- 3+ years of experience in Data Science
- Strong expertise in machine learning with a deep understanding of SQL and Python
- Exposure to industry-specific (CPG, manufacturing) use cases
- Strong client-facing skills
- Organized and detail-oriented with excellent communication and interpersonal skills

About the Company:
Polestar Solutions is a data analytics and enterprise planning powerhouse that helps customers derive sophisticated insights from data in a value-oriented manner. With a presence in the United States, UK, and India, Polestar Solutions boasts a world-class team and offers growth and learning opportunities to its employees. The company has been recognized for its expertise and passion in various accolades, including being featured in the Top 50 Companies for Data Scientists and in the Financial Times' High-Growth Companies across Asia-Pacific.
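For flavor, here is a minimal sketch, assuming scikit-learn, of the supervised train/validate loop behind the boosting work the posting names; synthetic data stands in for real CPG/manufacturing features.

```python
# Minimal sketch: fit a gradient-boosting classifier on synthetic data and
# report hold-out AUC. All figures are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

model = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                   random_state=0)
model.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"hold-out AUC = {auc:.3f}")
```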
posted 1 week ago

HPC Support Engineer

Netweb Technologies
experience: 2 to 6 Yrs
location
Faridabad, Haryana
skills
  • Linux distributions
  • Virtualization
  • Performance monitoring
  • Troubleshooting
  • Kubernetes
  • Load balancing
  • Firewalls
  • DNS
  • HTTP
  • LDAP
  • SMTP
  • SNMP
  • Linux administration
  • Load balancing
  • Disaster management
  • HPC Engineer
  • Linux scripting languages
  • Linux servers
  • Networking concepts
  • Terraform
  • PowerCLI
  • vSphere CSI
  • OS hardening
  • OS patching
  • Linux certifications
  • GPU nodes
  • schedulers
  • HPC concepts
  • VDM storage
Job Description
As an HPC Support Engineer at Netweb Technologies, your role will involve maintaining, configuring, and ensuring the reliable operation of computer systems, servers, and network devices. Your key responsibilities will include:
- Installing and upgrading computer components and software, managing servers, and implementing automation processes.
- Troubleshooting hardware and software errors, documenting issues and resolutions, prioritizing problems, and assessing their impact.
- Leading desktop and helpdesk support efforts to ensure timely resolution of desktop application, workstation, and related equipment problems with minimal disruption.
- Deploying, configuring, and managing HPC (High-Performance Computing) systems and applications through cross-technology administration, scripting, and monitoring automation execution.

**Qualifications:**
To excel in this role, you should possess the following skills and qualifications:
- Previous working experience as an HPC Engineer, with a minimum of 2-3 years of relevant experience.
- A Bachelor's degree in Computer Science or Information Systems (IT).
- Familiarity with Linux scripting language fundamentals.
- In-depth knowledge of Linux distributions such as RedHat and CentOS.
- Experience with Linux servers in virtualized environments.
- Good knowledge of Linux file permissions, file systems management, account management, performance monitoring, backup using Linux native tools, and Logical Volume Management.
- Excellent knowledge of server hardware platforms.
- A very good understanding of server, storage, and virtualization concepts.
- Good knowledge of networking concepts, including subnets, gateways, masks, etc.
- Excellent knowledge of performance monitoring and troubleshooting in Linux environments.
- Familiarity with tools like Terraform and PowerCLI.
- Basic understanding of Kubernetes and vSphere CSI.
- Knowledge of load balancing, firewalls, and other related concepts.
- Solid understanding of protocols such as DNS, HTTP, LDAP, SMTP, and SNMP.
- Good knowledge of network configuration and troubleshooting on Linux, OS hardening, performance tuning, and OS patching.
- Linux certifications (RHCT, RHCE, and LPIC) will be preferred.

**Additional Details:**
Netweb Technologies is a leading Indian-origin OEM specializing in High-end Computing Solutions (HCS). Committed to the "Make in India" policy, the company develops homegrown compute and storage technologies. With a state-of-the-art supercomputing infrastructure and a widespread presence across India, you will be part of a forefront player in innovation and high-performance computing solutions.

Contact: hr@netwebindia.com / upasana.srivastava@netwebindia.com
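As a sketch of the monitoring automation this role wraps around Linux nodes, here is a minimal node-health probe assuming the psutil package on a Unix host; the 90% threshold is an illustrative choice, not a value from the posting.

```python
# Minimal sketch: one-shot node health probe (CPU, memory, root disk,
# 1-minute load per core). The 90% threshold is illustrative.
import os

import psutil

load1, _, _ = os.getloadavg()  # Unix-only
checks = {
    "cpu %": psutil.cpu_percent(interval=1),
    "mem %": psutil.virtual_memory().percent,
    "disk / %": psutil.disk_usage("/").percent,
    "load1/core %": 100.0 * load1 / psutil.cpu_count(),
}
for name, value in checks.items():
    status = "WARN" if value > 90 else "ok"
    print(f"{name:14s} {value:6.1f}  {status}")
```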
posted 3 weeks ago
experience: 5 to 9 Yrs
location
Noida, Uttar Pradesh
skills
  • Data security
  • Operational efficiency
  • Data integration
  • Data Platform Engineering
  • Cloud-based data infrastructure
  • Infrastructure as Code
  • Data storage solutions
  • Data processing frameworks
  • Data orchestration tools
Job Description
Role Overview:
As a Data Platform Engineer at Capgemini, you will specialize in designing, building, and maintaining cloud-based data infrastructure and platforms for data-intensive applications and services. Your role will involve developing Infrastructure as Code and managing the foundational systems and tools for efficient data storage, processing, and management. You will be responsible for architecting robust and scalable cloud data infrastructure, and for selecting and implementing suitable storage solutions, data processing frameworks, and data orchestration tools. Additionally, you will ensure the continuous evolution of the data platform to meet changing data needs, leverage technological advancements, and maintain high levels of data security, availability, and performance. Your tasks will include creating and managing processes and tools to enhance operational efficiency, optimizing data flow, and ensuring seamless data integration to enable developers to build, deploy, and operate data-centric applications efficiently.

Key Responsibilities:
- Actively participate in the professional data platform engineering community, share insights, and stay up-to-date with the latest trends and best practices.
- Make substantial contributions to client delivery, particularly in the design, construction, and maintenance of cloud-based data platforms and infrastructure.
- Demonstrate a sound understanding of data platform engineering principles and knowledge in areas such as cloud data storage solutions (e.g., AWS S3, Azure Data Lake), data processing frameworks (e.g., Apache Spark), and data orchestration tools.
- Take ownership of independent tasks, and display initiative and problem-solving skills when confronted with intricate data platform engineering challenges.
- Commence leadership roles, which may encompass mentoring junior engineers, leading smaller project teams, or taking the lead on specific aspects of data platform projects.

Qualifications Required:
- Strong grasp of the principles and practices associated with data platform engineering, particularly within cloud environments.
- Proficiency in specific technical areas related to cloud-based data infrastructure, automation, and scalability.

Company Details:
Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. With a responsible and diverse group of 340,000 team members in more than 50 countries, Capgemini has a strong heritage of over 55 years. Trusted by its clients to unlock the value of technology, Capgemini delivers end-to-end services and solutions, leveraging strengths from strategy and design to engineering, all fueled by market-leading capabilities in AI, generative AI, cloud, and data, combined with deep industry expertise and a partner ecosystem.
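To ground the Spark part of the stack named above, here is a minimal sketch, assuming PySpark, of a batch rollup job: read raw JSON events from object storage, aggregate per day, and write partitioned Parquet back. Paths and field names are hypothetical placeholders.

```python
# Minimal sketch: daily event rollup. Bucket paths and the event_ts/event_type
# fields are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-event-rollup").getOrCreate()

events = spark.read.json("s3a://raw-zone/events/2024-01-01/")
daily = (
    events
    .withColumn("day", F.to_date("event_ts"))
    .groupBy("day", "event_type")
    .agg(F.count("*").alias("n_events"))
)
daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3a://curated-zone/event_rollups/")
spark.stop()
```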
posted 2 months ago
experience: 4 to 8 Yrs
location
Faridabad, Haryana
skills
  • Java
  • Design Thinking
  • Firebase
  • Google Maps API
  • Git
  • Continuous Integration
  • Performance Tuning
  • Socket Programming
  • Social Networks
  • User Interface (UI) Design
  • User Experience (UX) Design
  • SDK Knowledge
Job Description
As an Android developer at the company, your role will involve the development and maintenance of a social networking application for a wide range of Android mobile devices. You will collaborate with other engineers to create high-quality products by integrating Android applications with back-end services.

Responsibilities:
- Translate designs and wireframes into high-quality code
- Design, build, and maintain high-performance, reusable, and reliable Java code
- Ensure optimal performance, quality, and responsiveness of the application
- Identify and resolve bottlenecks and bugs
- Assist in maintaining code quality, organization, and automation
- Implement best practices in code implementation
- Willingness to work across flexible time zones if necessary

Required Skills:
- Strong knowledge of the Android SDK, various Android versions, and handling different screen sizes
- Familiarity with RESTful APIs for connecting Android applications to back-end services
- Proficiency in Android UI design principles, patterns, and best practices
- Experience in offline storage, threading, and performance tuning
- Ability to create applications with natural user interfaces and touch interactions
- Understanding of additional sensors like gyroscopes and accelerometers (preferred)
- Knowledge of the open-source Android ecosystem and common task libraries
- Translating business requirements into technical specifications
- Familiarity with cloud message APIs and push notifications
- Skill in benchmarking and optimization
- Understanding of Google's Android design principles and interface guidelines
- Proficiency with code versioning tools like Git
- Experience with continuous integration
- Proficiency in chat features (one-to-one, group chat, file sharing)
- Knowledge of Google Maps APIs is advantageous

The company is located in Noida, UP. Skills required for this position include Java, Design Thinking, Firebase, Google Maps API, Git, Social Networks, Continuous Integration, User Interface (UI) Design, User Experience (UX) Design, SDK Knowledge, Performance Tuning, and Socket Programming.
posted 1 day ago

Warehouse Supervisor

ATOVITT SERVICES PRIVATE LIMITED
experience: 9 to 14 Yrs
salary: 4.0 - 9 LPA
location
Gurugram, Delhi, Noida, Bangalore, Chennai, Hyderabad, Kolkata, Pune, Mumbai City, Port Blair

skills
  • OSHA safety procedures
  • data entry
  • inventory control
  • shipping and receiving
  • logistics functions
  • warehouse associates
  • pallet jack
  • excellent interpersonal skills
Job Description
We are looking for an experienced Warehouse Supervisor to oversee and coordinate daily warehousing activities. You will implement production, productivity, quality, and customer service standards and achieve the appropriate level of volume within time limits. Ultimately, you should be able to ensure that daily operations meet and exceed daily performance expectations and increase the company's overall market share.

Responsibilities:
- Achieve high levels of customer satisfaction through excellence in receiving, identifying, dispatching, and assuring the quality of goods
- Measure and report the effectiveness of warehousing activities and employees' performance
- Organize and maintain inventory and the storage area
- Ensure shipment and inventory transaction accuracy
- Communicate job expectations and coach employees
- Determine staffing levels and assign workloads
- Interface with customers to answer questions or solve problems
- Maintain item records, document necessary information, and utilize reports to project warehouse status
- Identify areas of improvement and establish innovative, or adjust existing, work procedures and practices
- Confer and coordinate activities with other departments
posted 2 months ago
experience: 4 to 8 Yrs
location
Faridabad, Haryana
skills
  • Presales
  • Server
  • Storage
  • HPC
  • IT hardware
  • Technical documentation
  • Emerging technologies
  • Communication skills
  • Presentation skills
  • Solution Architect
  • HCS solutions
  • High-performance computing
  • Market awareness
  • Cross-functional teams
Job Description
As a Solution Architect - Presales, your role is crucial in demonstrating advanced computing solutions to clients. You will be responsible for understanding customer requirements, designing tailored solutions using HCS offerings, and collaborating with internal teams to drive successful presales initiatives.

- **Customer Engagement:**
  - Collaborate closely with customers to understand their technical requirements and business objectives.
  - Conduct technical discussions, presentations, and workshops to showcase the value of HCS solutions.

- **Solution Design:**
  - Develop end-to-end solutions based on customer needs, leveraging the latest technologies in server, storage, HPC, and related infrastructure.
  - Work with internal teams to create customized solutions aligned with customer goals.

- **Presales Support:**
  - Provide technical assistance to the sales team during the presales process, including product demonstrations, proof-of-concept implementations, and technical documentation creation.

- **Market Awareness:**
  - Keep updated on industry trends, emerging technologies, and the competitive landscape in high-end computing solutions.
  - Contribute market insights to support the development of new products and services.

**Qualifications:**
- Bachelor's degree in Computer Science, Information Technology, or a related field; a Master's degree is preferred.
- Proven experience as a Solution Architect in a presales role, specializing in server, storage, HPC, and related technologies.
- Strong knowledge of IT hardware, high-performance computing, and storage infrastructure.
- Excellent communication and presentation skills, with the ability to convey technical concepts effectively to diverse audiences.
- Experience collaborating with cross-functional teams such as sales, engineering, and product management.
- Relevant certifications are a plus.

In addition to the above, the company offers benefits including health insurance, paid sick time, and a provident fund. This full-time position requires you to reliably commute or plan to relocate to Faridabad, Haryana. Preferred experience includes 4 years in presales and customer engagement.
posted 2 months ago
experience: 2 to 6 Yrs
location
Delhi
skills
  • Java
  • Android SDK
  • Android Studio
  • RESTful APIs
  • Threading
  • Performance tuning
  • Git
  • Agile methodologies
  • Kotlin
  • UI design principles
  • Offline storage
  • Cloud message APIs
  • Push notifications
  • CI/CD tools
Job Description
As an Android Mobile Application Developer, your role involves developing high-quality Android applications and creating innovative, user-friendly mobile apps. You will collaborate with designers, product managers, and other developers to deliver robust and scalable applications.

Key Responsibilities:
- Design and build high-performance Android applications
- Collaborate with cross-functional teams to define and ship new features
- Identify and resolve bottlenecks and bugs
- Perform unit testing and participate in code reviews
- Stay updated with the latest Android development trends and technologies

Qualifications Required:
- A Bachelor's degree in Computer Science or a related field
- Proven work experience as an Android developer
- Proficiency in Java and/or Kotlin
- Experience with the Android SDK, Android Studio, and RESTful APIs
- Knowledge of Android UI design principles, offline storage, threading, and performance tuning
- Familiarity with cloud message APIs and push notifications
- Experience with version control systems like Git

Additionally, you should have:
- Strong analytical and problem-solving abilities
- Excellent communication and teamwork skills
- Capability to work independently as well as in a team

Preferred qualifications include:
- Experience with CI/CD tools
- Knowledge of other mobile frameworks or cross-platform development
- Familiarity with Agile methodologies
- Contributions to open-source projects or a portfolio of apps in the Google Play Store