
78 Cd Packaging Jobs in Mahabaleshwar

posted 1 week ago

Software Engineer - Java Build Pipelines

DigiHelic Solutions Pvt. Ltd.
experience: 2 to 6 Yrs
location
All India
skills
  • Java
  • Maven
  • Jenkins
  • Git
  • Cloud Computing
  • OpenShift
  • Kubernetes
  • Linux
  • Agile Methodologies
  • CICD
  • GitHub Actions
  • Tekton
Job Description
Role Overview:
As a Software Engineer for the Secure Flow Build Team in Bangalore, India, you will play a crucial role in developing and improving services that provide build capabilities for Java-based applications in Red Hat's product delivery pipeline, Konflux. You will collaborate with global teams to automate processes, maintain infrastructure, monitor and troubleshoot application performance, and continuously improve build processes.

Key Responsibilities:
- Design, develop, maintain, and enhance services offering build capabilities for Java-based applications
- Develop, maintain, and enhance CI/CD pipelines, automating build, test, and deployment processes (a build-script sketch follows this listing)
- Collaborate with development, QA, and operations teams to ensure seamless integration and resolve build-related issues
- Manage configuration management and build systems to ensure consistency and efficiency
- Monitor the build process and application performance, and troubleshoot deployment issues
- Create and maintain documentation for build processes, including release notes and technical guides
- Continuously improve build processes, implement best practices, and stay current with industry trends and technologies
- Use internal tools such as Jira and Slack for task management and collaboration
- Ensure the security and integrity of the software supply chain

Qualifications Required:
- 2+ years of experience developing, packaging, and releasing Java software with Maven
- 2+ years of experience writing and maintaining CI/CD pipelines for Java applications (e.g., Jenkins, GitHub Actions)
- Experience managing or integrating with Maven repositories (e.g., JFrog Artifactory, Sonatype Nexus)
- Proficiency with Git and a Git-based source code manager (GitHub, GitLab, Atlassian Bitbucket, etc.)
- Solid understanding of cloud computing and technologies (e.g., OpenShift, Kubernetes, Tekton)
- Experience with Tekton or Argo Workflows
- Familiarity with the Linux platform
- Excellent problem-solving skills and attention to detail
- Experience with Agile methodologies

Additional Company Details:
The job may involve working with cutting-edge technologies and contributing to open source projects, making it an exciting opportunity for those interested in AI technologies.
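The Maven-centric build automation this listing describes often begins as a small CI wrapper script. Below is a minimal Python sketch that runs a batch-mode Maven build and propagates the exit code; the project path and goals are illustrative assumptions, not details from the listing.

```python
import subprocess
import sys

def run_maven_build(project_dir: str) -> int:
    """Run a batch-mode Maven build and return its exit code."""
    # -B: batch mode (no interactive prompts); -ntp: suppress transfer-progress noise in CI logs
    result = subprocess.run(
        ["mvn", "-B", "-ntp", "clean", "verify"],
        cwd=project_dir,
    )
    if result.returncode != 0:
        print(f"Maven build failed with exit code {result.returncode}", file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_maven_build(sys.argv[1] if len(sys.argv) > 1 else "."))
```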
ACTIVELY HIRING

posted 3 weeks ago

Python Developer

Gainwell Technologies
experience: 3 to 7 Yrs
location
All India
skills
  • Python
  • Flask
  • SQL
  • Docker
  • Azure DevOps
  • React
  • FastAPI
  • REST APIs
  • CICD
  • Delta Lake
  • Unity Catalog
  • GitHub Actions
Job Description
As a Full-Stack Developer at our company, you will design and deliver modern, scalable applications that integrate seamlessly with the Databricks Lakehouse platform.

Key Responsibilities:
- Developing interactive, user-friendly React frontends inside Databricks Apps
- Building and maintaining Python backends using FastAPI/Flask, integrated with Databricks SQL endpoints
- Writing secure, parameterized queries against Delta Lake tables via Unity Catalog (a short sketch follows this listing)
- Designing and exposing REST APIs that serve curated data and support interactive UI workflows
- Collaborating with Databricks engineers to connect front-end workflows with data pipelines
- Implementing authentication and authorization flows (SSO, OAuth, token management)
- Working with CI/CD pipelines (GitHub Actions/Azure DevOps) to deploy applications across DEV/QA/PROD
- Optimizing for performance, scalability, and cost efficiency in Databricks

Qualifications Required:
- Strong React expertise (Hooks, Context API, state management)
- Proficiency in UI frameworks such as TailwindCSS, Material UI, etc.
- Experience with data visualization libraries like Recharts, D3, Plotly
- Knowledge of API integration and async data handling
- Strong Python development skills with FastAPI/Flask
- Experience writing APIs that query Databricks SQL endpoints
- Proficiency in SQL, parameterized queries, and performance tuning
- Experience integrating with Databricks REST APIs/SDKs for jobs, clusters, secrets, and Unity Catalog
- Understanding of Delta Lake and Unity Catalog for governance and secure data access
- Familiarity with the Databricks Apps SDK, lifecycle, and deployment
- Knowledge of GitHub Actions, Azure DevOps, or similar CI/CD pipelines
- Docker knowledge for packaging backend services
- Awareness of cloud IAM, networking, and secrets management

Preferred or nice-to-have qualifications include exposure to PySpark for advanced transformations, knowledge of healthcare or data compliance domains (HIPAA/PHI), experience with observability tools (Dynatrace, Datadog, Prometheus, ELK, OpenTelemetry), and Agile experience.
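To make the "secure, parameterized queries" bullet concrete, here is a minimal sketch of a FastAPI endpoint querying a Databricks SQL endpoint with bound parameters instead of string interpolation. The environment variable names and table are illustrative assumptions, and named-parameter syntax can vary across databricks-sql-connector versions.

```python
import os
from databricks import sql  # pip install databricks-sql-connector
from fastapi import FastAPI

app = FastAPI()

@app.get("/orders/{region}")
def orders_by_region(region: str):
    # Connection details come from the environment; never hard-code tokens.
    with sql.connect(
        server_hostname=os.environ["DATABRICKS_HOST"],   # assumed env var names
        http_path=os.environ["DATABRICKS_HTTP_PATH"],
        access_token=os.environ["DATABRICKS_TOKEN"],
    ) as conn:
        with conn.cursor() as cur:
            # A named parameter keeps user input out of the SQL text (no injection).
            cur.execute(
                "SELECT order_id, amount FROM main.sales.orders "  # hypothetical Unity Catalog table
                "WHERE region = :region LIMIT 100",
                {"region": region},
            )
            cols = [c[0] for c in cur.description]
            return [dict(zip(cols, row)) for row in cur.fetchall()]
```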
ACTIVELY HIRING
posted 2 months ago
experience: 8 to 12 Yrs
location
Karnataka
skills
  • Python
  • Jenkins
  • Git
  • Helm
  • Docker
  • Telemetry
  • CICD
  • GitLab CI
  • CircleCI
  • Feature Flagging Systems
  • HighAvailability Systems
  • AI Agents
  • ProblemSolving
Job Description
As a SaaS DevOps Engineer at Cisco, you will play a crucial role in maintaining the reliability and availability of SaaS platforms to meet customer expectations. Your responsibilities will include:
- Building and maintaining CI/CD pipelines using industry-standard tools for smooth and automated deployments
- Utilizing Git tools for efficient code repository management and fostering collaboration among teams
- Planning and managing releases, assessing their impact on Continuous Deployment (CD)
- Implementing and overseeing feature flagging systems to facilitate controlled rollouts and reduce production risks (a rollout-bucketing sketch follows this listing)
- Designing and deploying high-availability (HA) systems across multi-region environments to ensure minimal downtime and maximal uptime
- Leveraging Helm, Docker, and other packaging tools to streamline deployment workflows and containerization
- Developing processes for live system upgrades and rollbacks to minimize customer impact during updates
- Monitoring running systems through telemetry data to ensure optimal performance and cost-effective operations
- Creating and managing support bundle tools for efficient issue diagnosis and troubleshooting in customer environments
- Planning and executing the release of SaaS solutions to on-premise environments for customers with hybrid or private cloud needs
- Managing the utilization of AI agents in CI/CD pipelines to enhance effectiveness and value for customers
- Collaborating with customer support teams to address operational and infrastructure-related issues effectively

Qualifications Required:
- Proficiency in Python and other scripting languages
- Strong familiarity with CI/CD tools like Jenkins, GitLab CI, CircleCI, or similar platforms
- Extensive experience with Git tools and workflows for version control and collaboration
- Expertise in packaging and deployment tools, including Helm and Docker
- Knowledge of feature flagging systems and their impact on production environments
- Proven track record in designing and maintaining high-availability (HA) systems across multi-region environments
- Experience with live upgrades and rollback strategies for SaaS applications
- Strong understanding of metrics and telemetry for monitoring system performance and managing costs
- Familiarity with the use of agents in CI/CD pipelines and their role in enhancing customer support
- Exceptional problem-solving skills and a customer-centric mindset for troubleshooting and issue resolution
- Bachelor's degree or higher with 8-12 years of relevant engineering work experience

Preferred Qualifications:
- Understanding of cost management strategies for SaaS applications, including insights based on telemetry
- Experience in planning and executing on-premise releases of SaaS solutions
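The controlled-rollout responsibility above often reduces to deterministic percentage bucketing. Below is a minimal, dependency-free sketch of that idea; the flag name and rollout table are invented for illustration, and a production system would typically use a managed flag service.

```python
import hashlib

ROLLOUT_PERCENT = {"new-billing-ui": 25}  # hypothetical flag -> % of users enabled

def is_enabled(flag: str, user_id: str) -> bool:
    """Deterministically bucket a user into 0-99 and compare to the rollout percentage."""
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < ROLLOUT_PERCENT.get(flag, 0)

# The same user always lands in the same bucket, so a 25% rollout is stable across
# requests, and widening it to 50% or 100% never disables already-enabled users.
print(is_enabled("new-billing-ui", "user-42"))
```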
ACTIVELY HIRING

posted 1 week ago
experience: 2 to 7 Yrs
location
Haryana
skills
  • Python
  • NumPy
  • Flask
  • Azure
  • AWS
  • Docker
  • Kubernetes
  • Gitlab
  • Microservices
  • FastAPI
  • CICD
  • GitHub Actions
  • API security
  • Pandas
  • scikitlearn
  • PyTorch
  • TensorFlow
  • LangGraph
  • Semantic Kernel
  • FAISS
  • Azure AI Search
  • Pinecone
Job Description
Role Overview:
You will be a hands-on AI/ML engineer responsible for designing, building, and productionizing Generative AI solutions, including RAG pipelines and multi-agent systems, to automate workflows and drive operational excellence. You will collaborate closely with solution/data architects, software developers, data engineers, and domain experts to rapidly prototype and deliver scalable, enterprise-grade systems. This individual contributor role requires strong research skills, deep expertise in AI foundation models, and the ability to translate cutting-edge concepts into impactful solutions for digital grid challenges.

Key Responsibilities:
- End-to-End GenAI Development: Design and implement RAG pipelines, agentic workflows, and LLM integrations for tasks such as document understanding, classification, and knowledge assistance (a retrieval sketch follows this listing)
- Multi-Agent Orchestration: Build agent-based applications for planning, tool use, and execution using frameworks like LangGraph, Semantic Kernel, and prompt orchestration tools
- AI Enterprise Architecture: Design scalable, modern, and secure architectures across AI/ML enterprise solutions
- Data & MLOps Foundations: Architect data pipelines and cloud solutions for training, deployment, and monitoring on Azure/AWS with Docker, Kubernetes, and CI/CD
- Rapid Prototyping to Production: Convert problem statements into prototypes, iterate with stakeholders, and harden into production-ready microservices (FastAPI) with APIs and event-driven workflows
- Evaluation & Reliability: Define rigorous evaluation metrics for LLM/ML systems (accuracy, latency, cost, safety); optimize retrieval quality, prompt strategies, and agent policies
- Security & Compliance: Implement Responsible AI guardrails, data privacy, PII handling, access controls, and auditability
- Collaboration & Enablement: Partner with data engineers, mentor junior team members, and contribute to internal documentation and demos

Qualifications Required:
- Education: Bachelor's or Master's in Computer Science, Data Science, Engineering, or equivalent experience
- Experience: 7-12 years delivering AI/ML and Data Science solutions in production, including 2-3 years focused on Generative AI/LLM applications
- Programming: Strong Python (typing, packaging, testing), data stacks (NumPy, Pandas, scikit-learn), API development (FastAPI/Flask)
- GenAI Expertise: Prompt engineering and RAG design (indexing, chunking, reranking); embeddings and vector databases (FAISS, Azure AI Search, Pinecone); agent frameworks (LangGraph, Semantic Kernel) and orchestration strategies; model selection/fine-tuning, cost-performance optimization, and safety filters
- Cloud & Data: Hands-on with Azure/AWS; experience with Azure OpenAI, Azure AI Search, Microsoft Fabric/Databricks (preferred), Snowflake, or a similar DWH
- MLOps: Docker, Kubernetes, CI/CD (GitHub Actions/GitLab), model deployment/monitoring
- Architecture: Microservices, event-driven design, API security, scalability, and resilience

Soft Skills:
- Excellent team player with the ability to work collaboratively in cross-functional and multicultural teams
- Strong communication skills; able to explain complex technical ideas to non-technical stakeholders
- Adaptability to changing priorities and evolving technologies
- Problem-solving mindset with creativity, curiosity, and a proactive approach
- Time management and prioritization in fast-paced, iterative environments
- A mentoring attitude toward junior colleagues and an openness to receiving feedback
- Strong sense of ownership and accountability over deliverables

Domain Knowledge:
- Experience applying AI/ML to power systems, electrical grids, or related domains
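Since the role centers on RAG pipelines, here is a minimal retrieval sketch using FAISS as the vector index. The toy embed() function is a stand-in for a real embedding model (an assumption for the example, so neighbors here are not semantically meaningful); only the FAISS calls reflect the actual library API.

```python
import hashlib
import numpy as np
import faiss  # pip install faiss-cpu

DIM = 64

def embed(text: str) -> np.ndarray:
    """Toy stand-in for a real embedding model: a deterministic unit vector per text."""
    seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2**32)
    v = np.random.default_rng(seed).standard_normal(DIM).astype("float32")
    return v / np.linalg.norm(v)

docs = ["Grid outage escalation runbook", "Transformer maintenance schedule", "Office lunch menu"]
index = faiss.IndexFlatIP(DIM)                      # inner product == cosine on unit vectors
index.add(np.stack([embed(d) for d in docs]))

query = embed("How do I escalate a grid outage?")
scores, ids = index.search(query[None, :], 2)       # top-2 nearest documents
retrieved = [docs[i] for i in ids[0]]
print(retrieved)  # with a real model, these are the passages stuffed into the LLM prompt
```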
ACTIVELY HIRING
posted 2 months ago
experience: 3 to 7 Yrs
location
Noida, Uttar Pradesh
skills
  • Ansible
  • Kubernetes
  • Python
  • Bash
  • Jenkins
  • Git
  • AWS
  • Azure
  • GCP
  • Terraform
  • GitLab CICD
  • Argo CD
  • Prometheus
  • Grafana
Job Description
As an Infrastructure Automation Engineer at Voereir AB, you will play a crucial role in creating and managing container images, overseeing containerization processes, and automating packaging builds through CI/CD pipelines. Your responsibilities will include:
- Creating and managing container images, focusing on migrating tools like Jenkins to container-based environments
- Overseeing backup and recovery processes for infrastructure and application resources
- Implementing and maintaining CI/CD pipelines for automating the packaging of image builds
- Collaborating with cross-functional teams to identify automation opportunities and implement solutions
- Troubleshooting and resolving issues related to automated provisioning, configuration, and operational management processes
- Leveraging hardware-based management tools such as Dell iDRAC and HP iLO for hardware-level management and control
- Ensuring infrastructure automation follows security and best practices
- Documenting workflows, best practices, and troubleshooting procedures

Qualifications required for this role:
- Proficiency in Ansible for automation, configuration management, and orchestration of infrastructure
- Experience with Terraform for infrastructure provisioning and management in the data center (a drift-check sketch follows this listing)
- Familiarity with Kubernetes at a beginner level for provisioning, scaling, and troubleshooting
- Experience with scripting languages like Python and Bash for task automation and tooling
- Understanding of networking concepts such as IP addressing, subnets, DNS, routing, and firewalls
- Experience with CI/CD tools like Jenkins, GitLab CI/CD, Argo CD
- Understanding of monitoring tools like Prometheus and Grafana
- Experience with version control systems like Git
- Knowledge of hardware-based management tools such as Dell iDRAC, HP iLO
- Knowledge of cloud platforms like AWS, Azure, GCP is a plus
- Proficient verbal and written communication skills
- Highly driven, positive attitude; a team player who is self-learning, self-motivated, and flexible
- Strong customer focus, networking, and relationship management skills
- Creative and innovative mindset, constantly seeking ways to enhance and optimize processes
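As an illustration of the provisioning automation described above, here is a small Python sketch that wraps `terraform plan -detailed-exitcode`, whose documented exit codes are 0 (no changes), 1 (error), and 2 (changes present); the working directory is an assumption.

```python
import subprocess
import sys

def plan_has_changes(workdir: str = ".") -> bool:
    """Run terraform plan and report whether live infrastructure has drifted from code."""
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-no-color", "-input=false"],
        cwd=workdir,
    )
    if result.returncode == 0:
        return False            # infrastructure matches configuration
    if result.returncode == 2:
        return True             # pending changes: flag for review/apply
    sys.exit(f"terraform plan failed with exit code {result.returncode}")

if __name__ == "__main__":
    print("drift detected" if plan_has_changes() else "in sync")
```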
ACTIVELY HIRING
posted 2 days ago
experience: 3 to 8 Yrs
location
Maharashtra, Pune
skills
  • Bamboo
  • Bitbucket
  • GitLab
  • Azure DevOps
  • Git
  • New Relic
  • AWS
  • Docker
  • Kubernetes
  • CICD
  • DevSecOps
  • Datadog
Job Description
As a Release and DevOps Engineer at our company, you will play a crucial role in managing the build, packaging, and deployment of software applications across different environments. Your primary focus will be on Continuous Integration and Continuous Delivery (CI/CD): developing and maintaining automated pipelines that enhance software delivery and system reliability. You will collaborate closely with development, QA, and operations teams to ensure the timely, high-quality release of software products.

Key Responsibilities:
- Coordinate all aspects of the software release lifecycle, including planning, scheduling, execution, and communication across staging and production environments; conduct release readiness meetings and facilitate go/no-go decision points with stakeholders
- Automate build, test, and deployment processes for various environments (dev, staging, production)
- Implement deployment strategies such as blue/green and canary to minimize risks (a promotion-gate sketch follows this listing)
- Create and maintain CI/CD pipelines utilizing tools like Bamboo, Bitbucket, GitLab, Azure DevOps, or similar platforms
- Monitor and troubleshoot pipeline failures and release-related issues
- Collaborate with security and compliance teams to integrate security scanning into pipelines (DevSecOps)
- Manage version control, branching, and merging strategies using Git
- Work with development, QA, operations, and product teams to plan and execute release schedules
- Ensure release plans include proper documentation, approvals, and compliance requirements
- Verify that releases adhere to best practices, security standards, compliance regulations, and rollback procedures
- Monitor application performance and health post-deployment using monitoring tools like New Relic or Datadog
- Continuously enhance release processes through automation and tooling improvements; track and report release metrics, including success rates, issues, and rollback incidents

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field (or equivalent experience)
- Minimum of 8 years of overall experience, with at least 3 years of hands-on DevOps experience focusing on CI/CD
- Prior experience as a software engineer is preferred
- Proficiency in pipeline automation and scripting
- Strong knowledge of version control systems, particularly Git
- Thorough understanding of the software development lifecycle (SDLC)
- Familiarity with cloud platforms like AWS and a solid grasp of containerization using Docker and Kubernetes

Please note that applicants may need to attend an onsite interview at a Wolters Kluwer office as part of the recruitment process.
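The canary strategy mentioned above usually comes down to a promotion gate that compares the canary's error rate against the baseline before widening traffic. Here is a minimal sketch of such a gate; the metric values and thresholds are invented, and a real deployment would pull them from a monitoring tool such as New Relic or Datadog.

```python
def promote_canary(baseline_error_rate: float, canary_error_rate: float,
                   max_ratio: float = 1.5, absolute_cap: float = 0.05) -> bool:
    """Return True to promote the canary, False to roll it back."""
    if canary_error_rate > absolute_cap:
        return False  # hard cap: never promote above 5% errors regardless of baseline
    # Allow some noise: the canary may run up to 1.5x the baseline error rate.
    return canary_error_rate <= baseline_error_rate * max_ratio

# Example: baseline at 1% errors, canary at 1.2% -> promote; canary at 6% -> rollback.
print(promote_canary(0.01, 0.012))  # True
print(promote_canary(0.01, 0.06))   # False
```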
ACTIVELY HIRING
posted 2 months ago
experience: 2 to 6 Yrs
location
Hyderabad, Telangana
skills
  • Linux
  • Systems Programming
  • C
  • C++
  • Python
  • Bash
  • OS Internals
  • Debugging
  • Git
  • Jenkins
  • PostgreSQL
  • Infrastructure Software Development
  • Hypervisors
  • Cloud Platforms
  • ProblemSolving
  • CICD Systems
  • Linux Packaging Systems
Job Description
As a Software Engineer at Nasuni, you will play a key role in developing and maintaining components of Nasuni's NAS appliance platform. You will work on Linux systems programming using languages like C, C++, Python, and Bash. Your responsibilities will include writing and debugging Linux systems code, collaborating with senior engineers to resolve customer-reported issues, participating in design and code reviews, and contributing to build/release workflows. You will also write automated tests and enhance the resilience of the infrastructure software.

Key Responsibilities:
- Develop and maintain components of Nasuni's NAS appliance platform
- Write and debug Linux systems code
- Investigate and resolve customer-reported issues with senior engineers
- Participate in design reviews, code reviews, and daily standups
- Write automated tests and contribute to build/release workflows

Qualifications Required:
- 1-4 years of experience in systems or infrastructure software development
- Proficiency in Linux systems programming using C, C++, Python, and Bash
- Basic understanding of OS internals such as processes, memory, filesystems, and networking (a procfs sketch follows this listing)
- Willingness to learn about hypervisors, cloud platforms, and Linux-based appliance development
- Strong problem-solving and debugging skills
- Excellent communication and collaboration abilities

It would be advantageous if you have exposure to cloud environments like AWS, Azure, or GCP; hands-on experience with Git, Jenkins, or CI/CD systems; knowledge of Linux packaging systems (rpm/yum); an interest in open-source contributions or kernel development; and familiarity with PostgreSQL or similar databases.

As Nasuni is starting an India Innovation Center in Hyderabad, you will work in a hybrid culture, spending 3 days a week in the office during core hours and 2 days working from home. Nasuni also offers competitive compensation programs, flexible time off and leave policies, comprehensive health and wellness coverage, hybrid and flexible work arrangements, employee referral and recognition programs, professional development and learning support, an inclusive and collaborative team culture, modern office spaces with team events and perks, and retirement and statutory benefits as per Indian regulations.

Please note that Nasuni does not accept agency resumes. Kindly refrain from forwarding resumes to job boards, Nasuni employees, or any other company location, as Nasuni is not responsible for any fees related to unsolicited resumes.
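As a taste of the OS-internals work the role mentions, here is a small Linux-only Python sketch that reads a process's resident memory straight from procfs; field names follow the kernel's /proc/[pid]/status format.

```python
def resident_memory_kb(pid: str = "self") -> int:
    """Return the resident set size (VmRSS) of a process, in kB, read from /proc."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                # Line format: "VmRSS:     12345 kB"
                return int(line.split()[1])
    raise RuntimeError("VmRSS not found (kernel threads have no userspace RSS)")

if __name__ == "__main__":
    print(f"This process is using {resident_memory_kb()} kB of resident memory")
```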
ACTIVELY HIRING
posted 1 month ago

Graphic Designer

Karadi Path Education Company
experience: 3 to 8 Yrs
location
Chennai, Tamil Nadu
skills
  • Product Design
  • Marketing Collaterals
  • Logo Design
  • Typography
  • UI Design
  • UX Design
  • Layouts
Job Description
Role Overview:
You will be responsible for a variety of design tasks: book covers, CD labels, DVD in-lay cards, and packaging; marketing collaterals such as brochures, posters, fliers, sell sheets, and catalogues; logo design for subsidiaries and imprints; layout and composition of the inner pages of picture books and illustrated books for children using typography; website and web page design; and UI and UX design for mobile applications. You will also maintain technical knowledge by attending design workshops, reviewing professional publications, and participating in professional societies.

Key Responsibilities:
- Design book covers, CD labels, DVD in-lay cards, and packaging
- Create marketing collaterals like brochures, posters, fliers, sell sheets, and catalogues
- Develop logos for subsidiaries and imprints
- Lay out and compose inner pages of picture books and illustrated books for children using typography
- Design for websites and web pages
- Create UI and UX designs for mobile applications

Qualifications Required:
- 3-8 years of experience in product design and graphic design
- Proficiency in design software and tools
- Strong understanding of typography and layout design
- Ability to stay updated with the latest design trends and technologies
- Excellent communication and teamwork skills

(Note: No additional details about the company were provided in the job description.)
ACTIVELY HIRING
posted 2 weeks ago
experience: 5 to 9 Yrs
location
Gujarat, Vadodara
skills
  • Swift
  • RESTful APIs
  • Git
  • iOS Native development
  • ObjectiveC
  • React Native
  • Kotlin Multiplatform Mobile
  • CICD pipelines
  • mobile testing frameworks
Job Description
As a Senior Software Engineer in iOS & Cross-Platform Mobile Development at Prepaid Management Services, you will be part of a team dedicated to delivering innovative prepaid solutions globally. Your expertise in iOS mobile application development using Native (Swift/Objective-C), React Native, and Kotlin Multiplatform will be crucial in building high-quality, scalable mobile applications.

Key Responsibilities:
- Design, develop, and maintain iOS applications using Swift/Objective-C, React Native, and Kotlin Multiplatform
- Collaborate with cross-functional teams to define, design, and ship new features
- Ensure the performance, quality, and responsiveness of applications
- Write clean, maintainable, and testable code following best practices
- Participate in code reviews and mentor junior developers
- Work closely with product managers, designers, and QA to deliver high-quality mobile experiences
- Optimize applications for maximum performance and scalability
- Stay up to date with the latest mobile development trends and technologies

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 5+ years of professional experience in mobile application development
- Strong proficiency in iOS Native development (Swift/Objective-C)
- Hands-on experience with React Native and Kotlin Multiplatform Mobile (KMM)
- Solid understanding of mobile UI/UX principles and best practices
- Experience with RESTful APIs, third-party libraries, and version control tools (e.g., Git)
- Familiarity with CI/CD pipelines and mobile testing frameworks

Desirable:
- SDLC support tools (ALM, Confluence, SharePoint)
- Experience using npm and CSS
- Code packaging and publishing automation
- Financial services experience (Cards/PCI)

Preferred/Nice-to-have Skills:
- Basic knowledge of Android platform architecture and development
- Experience with the Android application release process (Play Store submission, signing, versioning)
- Exposure to Agile/Scrum development methodologies
- Knowledge of performance tuning and memory management in mobile apps

Personal Qualities:
- Flexible and adaptable
- Excellent problem solver
- Strong communication skills, both verbal and written
- Self-starter with initiative
- Leadership and mentorship skills
- Detail-oriented and quality-focused
- Team-oriented
- Ethical and responsible
ACTIVELY HIRING
posted 3 weeks ago
experience: 2 to 6 Yrs
location
All India, Ahmedabad
skills
  • JavaScript
  • Docker
  • ADFS
  • Nodejs
  • REST APIs
  • JWT
  • OAuth20
  • OpenID Connect
  • TypeScript
  • GitLab CICD
  • Grafana
  • Prometheus
  • Datadog
  • OpenAPI Specification
Job Description
As a Backend Developer at our company located in Ahmedabad, Gujarat, you will develop and maintain robust, scalable, and secure backend services using Node.js and TypeScript. You will design and expose RESTful APIs with proper validation, logging, and monitoring hooks; implement authentication and authorization flows using JWT and ADFS-based OAuth2/SSO integrations; and keep APIs consistent and self-documented using the OpenAPI specification. Additionally, you will work on database management, Docker containers, GitLab CI/CD pipelines, reverse proxy setup, performance tuning, and system observability.

**Key Responsibilities:**
- Develop and maintain robust, scalable, and secure backend services using Node.js and TypeScript
- Design and expose RESTful APIs with proper validation, logging, and monitoring hooks
- Implement authentication and authorization flows using JWT and ADFS-based OAuth2/SSO integrations (a token-validation sketch follows this listing)
- Maintain consistent and self-documented APIs using the OpenAPI specification
- Create and manage Docker containers for service packaging and environment consistency
- Set up, maintain, and optimize GitLab CI/CD pipelines for automated build, test, and deployment
- Collaborate with the infrastructure team on reverse proxy setup, security certificates, and internal routing
- Participate in debugging, performance tuning, and root-cause analysis of production issues
- Ensure system health and observability using log aggregation, metrics, and alerting tools
- Create detailed technical documentation and operational runbooks

**Requirements:**
- Bachelor's degree in Computer Science, Engineering, or equivalent practical experience
- 2+ years of backend development experience using Node.js and related tools
- Solid understanding of HTTP protocols, security mechanisms, and REST principles
- Strong familiarity with OAuth2, JWT, and authentication systems like ADFS or SSO
- Practical experience with Docker, version control (Git), and CI/CD pipelines
- Able to read OpenAPI (Swagger) specs and implement endpoints accordingly
- Basic understanding of Linux-based server environments
- Comfortable working in an agile development environment

**Bonus Skills:**
- Experience with infrastructure as code (e.g., Terraform)
- Exposure to reverse proxy tools like Nginx, Traefik, or Apache HTTPD
- Familiarity with Active Directory Groups, permission mapping, and access controls
- Knowledge of logging and monitoring stacks such as Grafana, Prometheus, or Datadog
- Understanding of MultiValue platforms or terminal-based enterprise systems is a plus

You will be part of a dynamic team that offers a hybrid working culture, amazing perks, medical benefits, mentorship programs, certification courses, flexible work arrangements, free drinks, fridge, snacks, competitive salary, and recognitions.

(Note: Additional details about the company were not provided in the job description.)
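The JWT flow described above boils down to verifying signature, issuer, and audience before trusting any claim. The role's stack is Node.js, but the same steps look like this in a minimal Python sketch using PyJWT; the key path, issuer, and audience values are placeholders.

```python
import jwt  # pip install PyJWT

PUBLIC_KEY = open("adfs_signing_key.pem").read()  # placeholder path to the IdP's public key

def verify_token(token: str) -> dict:
    """Validate signature, expiry, issuer, and audience; return claims or raise."""
    return jwt.decode(
        token,
        PUBLIC_KEY,
        algorithms=["RS256"],                      # pin the algorithm; never accept "none"
        issuer="https://adfs.example.com/adfs",    # hypothetical ADFS issuer
        audience="api://backend-service",          # hypothetical audience / client id
    )

# jwt.decode raises ExpiredSignatureError / InvalidAudienceError / InvalidIssuerError
# on failure, so callers can map each to a 401 response with a precise reason.
```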
ACTIVELY HIRING
posted 2 weeks ago

DevOps Engineer I

Foxit Software
experience: 1 to 5 Yrs
location
Noida, Uttar Pradesh
skills
  • Python
  • Bash
  • Java
  • PHP
  • Docker
  • Kubernetes
  • Helm
  • Jenkins
  • AWS
  • Networking
  • DNS
  • VPN
  • Terraform
  • GitHub Actions
  • GitLab CI
  • CloudWatch
  • Datadog
  • Azure Monitor
  • WAF
  • SAST
  • DAST
  • Load Balancers
  • RDS
  • Azure SQL
Job Description
As a DevOps Engineer at Foxit, your primary responsibility will be to automate and secure Foxit's multi-cloud infrastructure, enabling rapid, reliable, and compliant delivery of applications worldwide. You will work across infrastructure automation, CI/CD optimization, cloud security, vulnerability management, observability, and AI-enabled operational efficiency.

Key Responsibilities:
- Deploy and maintain AWS infrastructure using Terraform with minimal guidance
- Manage containerized workloads (Docker, Kubernetes EKS) and use Helm effectively for packaging and deployment
- Implement deployment strategies independently, such as blue-green, canary, and rolling updates
- Develop and optimize automation scripts using Python and Bash for operational efficiency
- Research and implement AI-driven tools to enhance log analysis and alert management
- Optimize and troubleshoot CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI) to ensure reliable delivery
- Configure and manage monitoring using Datadog, CloudWatch, and Azure Monitor, and implement log aggregation solutions
- Perform comprehensive log analysis to identify performance issues and bottlenecks and proactively resolve problems (a spike-detection sketch follows this listing)
- Apply OWASP Top 10 principles in infrastructure configuration and deployment processes
- Implement and maintain SAST and DAST tools in CI/CD pipelines
- Configure and optimize WAF rules and network security controls
- Execute the vulnerability management lifecycle: scanning, assessment, prioritization, and remediation coordination

Qualifications:

Technical Skills (Required):
- Languages: Solid scripting experience in Python and Bash; basic understanding of Java or PHP
- Cloud Platforms: 1+ years hands-on experience with AWS (EC2, Lambda, S3, VPC, IAM)
- Infrastructure as Code: Working proficiency in Terraform; CloudFormation knowledge preferred
- Containerization: 1+ years production experience with Docker and Kubernetes (EKS/AKS); Helm chart creation and management
- DevOps Practices: Proven experience building and maintaining CI/CD pipelines using Jenkins, GitHub Actions, or GitLab CI
- Version Control: Expert-level Git workflow management and branching strategies
- Linux/Unix: Strong command-line proficiency and system administration skills

Technical Skills (Nice to Have):
- Monitoring & Observability: Experience with Datadog, CloudWatch, Azure Monitor, or similar platforms
- Security Tools: Familiarity with SAST/DAST tools, WAF configuration, and vulnerability scanners
- Networking: Solid understanding of DNS, load balancers, VPNs, and network security concepts
- Database Management: Experience with RDS, Azure SQL, or container-based database deployments

Professional Experience:
- 1-3 years in DevOps, site reliability engineering, or infrastructure-focused development with measurable impact on system reliability
- Able to independently manage staging infrastructure and resolve complex technical issues
- Demonstrated ability to learn new technologies rapidly and implement them effectively in production environments
- Strong analytical and troubleshooting skills with the ability to work independently
- Excellent technical communication skills and a collaborative mindset

Education & Certifications:
- Bachelor's degree in Computer Science, Engineering, or a related field, OR equivalent practical experience with demonstrable expertise
- Good to have: AWS Certified Solutions Architect Associate or Certified Kubernetes Administrator

Join Foxit to automate, scale, and secure systems that power millions, and build smarter, faster, together.
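The log-analysis responsibility above is often a matter of bucketing errors over time and flagging spikes. Here is a dependency-free sketch of that pattern; the log format (ISO timestamp followed by a level) is an assumption.

```python
import re
from collections import Counter

LINE = re.compile(r"^(\d{4}-\d{2}-\d{2}T\d{2}:\d{2}):\d{2}\S*\s+(\w+)")  # minute + level

def error_spikes(lines, threshold=10):
    """Count ERROR lines per minute and return the minutes exceeding the threshold."""
    per_minute = Counter()
    for line in lines:
        m = LINE.match(line)
        if m and m.group(2) == "ERROR":
            per_minute[m.group(1)] += 1
    return {minute: n for minute, n in per_minute.items() if n > threshold}

sample = [
    "2024-05-01T10:03:12Z ERROR payment timeout",
    "2024-05-01T10:03:48Z ERROR payment timeout",
    "2024-05-01T10:04:02Z INFO request ok",
]
print(error_spikes(sample, threshold=1))  # {'2024-05-01T10:03': 2}
```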
ACTIVELY HIRING
posted 2 weeks ago
experience: 3 to 7 Yrs
location
All India, Hyderabad
skills
  • Data Engineering
  • Analytics
  • SQL
  • Python
  • Version Control
  • Monitoring
  • ML Engineering
  • ETLELT Tools
  • Cloud Data Platforms
  • GenAILLMs
  • APIsIntegration
  • CICD
  • S2PCLMAP Data
  • SupplierRiskMarket Data
  • LLM Orchestration Frameworks
  • Vector Stores
  • MLOpsLLMOps
  • Cloud Experience
  • Containers
  • BI Skills
  • Data Quality Tooling
Job Description
Role Overview:
Join a hands-on team at Amgen building the next generation of AI-enabled Procurement. As a Senior Associate, Digital Intelligence & Enablement, you will combine data engineering and Generative AI skills to turn use cases into reliable products. Your responsibilities will include developing and maintaining pipelines, implementing GenAI capabilities, shipping pilots, hardening for production, partnering with various teams, and documenting processes.

Key Responsibilities:
- Build the data backbone by developing and maintaining pipelines from ERP/P2P, CLM, supplier, AP, and external sources into governed, analytics/AI-ready datasets
- Implement GenAI capabilities such as retrieval-augmented generation, prompts/chains, and lightweight services/APIs for various procurement processes (a prompt-assembly sketch follows this listing)
- Ship pilots and measure value: contribute to 8-12 week pilots with clear baselines, telemetry, and dashboards
- Harden products for production by packaging code, automating CI/CD, adding evaluation and observability, and supporting incident triage
- Partner with category teams, the AI/ML platform, IT Architecture, Security/Privacy, and vendors; produce clear runbooks and user guides

Qualifications Required:
- 3+ years of experience in data engineering/analytics/ML engineering delivering production-grade pipelines and services
- Strong proficiency in SQL and Python; experience with ETL/ELT tools and cloud data platforms
- Practical exposure to GenAI/LLMs, APIs/integration, version control, testing, and CI/CD
- Clear communicator who collaborates effectively across business, data, and engineering teams

Additional Details:
Amgen has been a pioneer in the biotech industry since 1980, focusing on four therapeutic areas: Oncology, Inflammation, General Medicine, and Rare Disease. The company's collaborative, innovative, and science-based culture offers employees the opportunity to make a lasting impact on the lives of patients while advancing their careers. Joining Amgen means transforming patient lives through innovative medicines and contributing to a global mission of serving patients living with serious illnesses.
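To illustrate the "prompts/chains" part of RAG work in a stack-agnostic way, here is a minimal sketch that assembles retrieved passages into a grounded prompt with inline citations; the passages and template are invented, and the actual LLM call is stubbed out.

```python
def build_grounded_prompt(question: str, passages: list[str]) -> str:
    """Assemble a prompt that forces the model to answer only from cited context."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the numbered context below. "
        "Cite sources like [1]. If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

passages = [  # in a real pipeline these come from the vector-store retriever
    "Contract C-1042 with Acme Corp renews on 2025-03-01.",
    "Renewals above $500k require Legal sign-off.",
]
prompt = build_grounded_prompt("When does the Acme contract renew?", passages)
print(prompt)  # this string would be sent to the LLM client (call stubbed here)
```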
ACTIVELY HIRING
posted 1 month ago

Software Engineer - Python (AI)

BEO Software Private Limited
experience: 2 to 6 Yrs
location
Kochi, Kerala
skills
  • unsupervised learning
  • optimization
  • Transformers
  • algorithms
  • data structures
  • computational complexity
  • communication skills
  • Python programming
  • ML tooling
  • PyTorch
  • TensorFlow
  • classical ML
  • DL
  • supervised learning
  • CNNs
  • RNNs
  • LSTMs
  • numerical stability
  • finetuning open models
  • PEFT approaches
  • model quantization
  • cloud ML environment
  • multimodal training pipelines
  • MLOps concepts
  • distributedefficiency libraries
  • LangGraph
  • Autogen
  • CrewAI
  • BigQuery
  • Synapse
  • AI safety
  • responsibleAI best practices
Job Description
Role Overview:
You will be a hands-on AI engineer responsible for designing, prototyping, and delivering generative AI capabilities. Your role will involve practical research, building POCs, fine-tuning open models, contributing to multimodal experiments, and helping take solutions towards production, with the opportunity to rapidly learn modern and classical ML techniques.

Key Responsibilities:
- Build and evaluate prototypes/POCs for generative AI features and ideas
- Fine-tune and adapt open-source LLMs and smaller generative models for targeted use cases
- Collaborate on multimodal experiments involving text, image, and audio, and implement training/evaluation pipelines
- Implement data preprocessing, augmentation, and basic feature engineering for model inputs
- Run experiments: design evaluation metrics, perform ablations, log results, and iterate on models
- Optimize inference and memory footprint for models through quantization, batching, and basic distillation
- Contribute to model training pipelines, scripting, and reproducible experiments
- Work with cross-functional teams (product, infra, MLOps) to prepare prototypes for deployment
- Write clear documentation, present technical results to the team, participate in code reviews, and share knowledge
- Continuously learn by reading papers, trying new tools, and bringing fresh ideas into projects

Qualification Required:

Mandatory Technical Skills:
- Strong Python programming skills and familiarity with ML tooling such as numpy, pandas, and scikit-learn
- Hands-on experience (2+ years) with PyTorch and/or TensorFlow for model development and fine-tuning
- Solid understanding of classical ML & DL concepts including supervised/unsupervised learning, optimization, CNNs, RNNs/LSTMs, and Transformers
- Good knowledge of algorithms & data structures, numerical stability, and computational complexity
- Practical experience fine-tuning open models such as Hugging Face Transformers, the LLaMA family, BLOOM, Mistral, or similar
- Familiarity with PEFT approaches (LoRA, adapters, QLoRA basics) and simple efficiency techniques like mixed precision and model quantization (a LoRA sketch follows this listing)
- Comfortable running experiments, logging with tools like Weights & Biases and MLflow, and reproducing results
- Exposure to at least one cloud ML environment (GCP Vertex AI, AWS SageMaker, or Azure AI) for training or deployment tasks
- Good communication skills for documenting experiments and collaborating with product/infra teams

Highly Desirable / Preferred Skills:
- Experience with multimodal training pipelines or cross-modal loss functions
- Familiarity with MLOps concepts such as model packaging, CI/CD for models, and basic monitoring
- Experience with tools like DeepSpeed, Accelerate, Ray, or similar distributed/efficiency libraries
- Knowledge of LangGraph/Autogen/CrewAI or interest in agentic systems
- Experience with BigQuery/Synapse or data warehousing for analytics
- Publications, open-source contributions, or sample projects demonstrating model work (GitHub, Colabs, demos)
- Awareness of AI safety and responsible-AI best practices

(Note: Additional details about the company were not provided in the job description.)
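Because the listing calls out PEFT approaches such as LoRA, here is a minimal PyTorch sketch of the core idea: freeze a pretrained linear layer and learn a low-rank update beside it. Dimensions and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update: W + (alpha/r) * B @ A."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                              # only the adapters train
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r)) # zero init: update starts at 0
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # ~12k instead of ~590k for the full layer
```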
ACTIVELY HIRING
posted 1 week ago
experience: 3 to 7 Yrs
location
Hyderabad, Telangana
skills
  • python
  • docker
  • kubernetes
  • aws
  • terraform
Job Description
You will be responsible for building backend components for an MLOps platform on AWS. This includes designing and developing backend services to support feature engineering, feature serving, model deployment, and inference for both batch and real-time systems (a minimal service sketch follows this listing). Collaboration with global cross-functional teams and participation in an on-call rotation to handle production incidents will be required. Your core responsibilities will include backend API development, cloud-based service integration, automation, and maintaining reliable, scalable infrastructure for ML workflows.

Required skills:
- Strong Python development experience (3+ years)
- Proficiency in backend frameworks such as Flask, Django, or FastAPI
- Familiarity with web servers like Gunicorn and Uvicorn
- Understanding of ASGI/WSGI
- Knowledge of AsyncIO and concurrent programming concepts
- Experience with cloud services, especially AWS, including Lambda, API Gateway, S3, and CloudWatch
- Proficiency in unit and functional testing frameworks, as well as CI/CD systems like Jenkins, GitHub Actions, or GitLab CI

Nice-to-have skills:
- Experience with AWS SageMaker, Kubeflow, and MLflow
- Exposure to model deployment and inference pipelines
- Familiarity with big data tools such as Apache Spark
- Knowledge of Apache Kafka for Python client apps
- Experience with DevOps tools like Docker, ECS/EKS, and Terraform
- Familiarity with Python packaging tools like Wheel, PEX, and Conda environments

In summary, the ideal candidate for this role is a Python backend engineer with strong AWS experience, proficiency in FastAPI/Flask, AsyncIO, and CI/CD pipelines, and exposure to MLOps tools like SageMaker or MLflow.
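Given the emphasis on FastAPI, AsyncIO, and inference endpoints, here is a minimal async service sketch; the model call is a stubbed placeholder rather than a real SageMaker or MLflow integration.

```python
import asyncio
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    features: list[float]

@app.get("/health")
async def health():
    return {"status": "ok"}

@app.post("/predict")
async def predict(req: PredictRequest):
    # Placeholder for a real async call to a model server (e.g., a SageMaker endpoint).
    await asyncio.sleep(0)  # yield to the event loop; real I/O would await an HTTP client
    score = sum(req.features) / max(len(req.features), 1)  # toy "model"
    return {"score": score}

# Run with: uvicorn app:app  (assuming this file is app.py; Gunicorn+UvicornWorker in production)
```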
ACTIVELY HIRING
posted 2 weeks ago
experience: 1 to 5 Yrs
location
Karnataka
skills
  • C
  • C++
  • Java
  • Python
  • Bash
  • Linux kernel
  • ACPI
  • gitgerrit
  • Device tree
  • Linux distribution
  • CICD tools
  • Package managers
Job Description
As a Senior Software Engineer at Qualcomm India Private Limited, you will play a pivotal role in designing, developing, optimizing, and commercializing software solutions for next-generation data center platforms. You will collaborate closely with cross-functional teams to advance critical technologies such as virtualization, memory management, scheduling, and the Linux Kernel.

**Key Responsibilities:**
- Collaborate within the team and across teams to design, develop, and release software, tooling, and practices that meet community standards as well as internal and external requirements
- Bring up platform solutions across the Qualcomm chipset portfolio
- Triage software build, tooling, packaging, functional, or stability failures
- Guide and support development teams inside and outside the Linux organization, focusing on Linux userspace software functionality, integration, and maintenance
- Work with development and product teams for issue resolution

**Qualifications Required:**
- Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field and 2+ years of software engineering or related work experience, OR a Master's degree and 1+ year of related work experience, OR a PhD in a relevant field
- 2+ years of academic or work experience with programming languages such as C, C++, Java, Python, etc.

**Preferred Qualifications:**
- Master's degree in Engineering, Information Systems, Computer Science, or a related field
- Strong background in Computer Science and software fundamentals
- Working knowledge of C and C++, and proficiency in scripting languages (Bash, Python, etc.)
- Experience using git/gerrit
- Strong understanding of the Linux kernel, system services, and the components of a Linux distribution
- Familiarity with package managers and CI/CD tools
- Ability to debug complex compute and data center systems
- Strong problem-solving skills
- Prior experience with Qualcomm software platforms is a plus
- Mature interpersonal skills and the ability to work collaboratively within and across teams

If you are an individual with a disability and need an accommodation during the application/hiring process, Qualcomm is committed to providing accessible processes. Qualcomm expects its employees to abide by all applicable policies and procedures, including security requirements regarding protection of confidential information.
ACTIVELY HIRING
posted 3 weeks ago
experience: 8 to 12 Yrs
location
All India
skills
  • Java
  • API
  • MuleSoft
  • Salesforce
  • REST
  • SOAP
  • Git
  • Jenkins
  • GitLab
  • cloud computing
  • scripting
  • automation
  • Docker
  • Kubernetes
  • security
  • Micro service architecture
  • middleware tools
  • Camunda
  • Workato
  • ERPBilling systems
  • CICD pipelines
  • CircleCI
Job Description
As a Staff Integration and DevOps Engineer on Procore's Data, Technology & Security (DTS) team, you will be responsible for ensuring efficient, scalable, and reliable integrations using Java, microservice architecture, and middleware tools like Camunda, Workato, and MuleSoft. You will partner with Salesforce and ERP/Billing systems teams to build integrations and implement DevOps processes for automated build, test, and deployment.

**Key Responsibilities:**
- Work with cross-functional teams to build integrations with Salesforce and third-party/ERP/Zuora Billing systems
- Collaborate with business analysts to gather integration requirements and translate them into technical specifications
- Participate in code reviews; troubleshoot and resolve integration issues
- Optimize integration performance, identify and resolve bottlenecks, and monitor integration logs
- Design, develop, and maintain integrations using REST, SOAP, and other APIs
- Implement error handling and logging mechanisms for integrations (a retry/backoff sketch follows this listing)
- Design and uplift CI/CD pipelines for automated processes
- Automate deployment processes and utilize version control systems for code management
- Contribute Java and Camunda expertise to influence the integration architecture
- Create and maintain technical documentation for integrations and deployment processes
- Explore agentic solutions to enhance business processes

**Qualifications Required:**
- 8+ years of experience in application development and DevOps
- Strong understanding of APIs (REST, SOAP, Bulk) and middleware platforms like Camunda and MuleSoft
- Proficiency in Java J2EE, SOQL/SOSL, DevOps practices, and CI/CD pipelines
- Familiarity with version control systems (Git) and tools like Jenkins, GitLab, CircleCI
- Expertise in cloud computing, scripting, automation, CI/CD, and security
- Understanding of Docker and Kubernetes for application packaging and deployment
- Strong problem-solving, analytical, communication, and collaboration skills
- Education: Engineering degree in Computer Science, Information Technology, or a related field

Procore Technologies is a cloud-based construction management software company that supports clients in efficiently building various structures. Procore offers a range of benefits and perks to empower employees to grow and thrive, and fosters a culture of ownership, innovation, and continuous learning. If you are interested in exploring opportunities at Procore, consider joining our Talent Community to stay updated on new roles. For individuals requiring alternative methods of applying due to disabilities, please contact the benefits team to discuss accommodations.
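A recurring theme above is integration resilience: retries with backoff plus structured logging around third-party calls. Here is a small Python sketch of that pattern using requests; the endpoint URL and retry policy are illustrative.

```python
import logging
import time
import requests

log = logging.getLogger("integration")
RETRYABLE = {429, 500, 502, 503, 504}

def call_api(url: str, payload: dict, attempts: int = 4) -> dict:
    """POST with exponential backoff on transient failures; raise on exhaustion."""
    for attempt in range(1, attempts + 1):
        try:
            resp = requests.post(url, json=payload, timeout=10)
            if resp.status_code not in RETRYABLE:
                resp.raise_for_status()     # surfaces non-retryable 4xx immediately
                return resp.json()
            log.warning("attempt %d got retryable status %d", attempt, resp.status_code)
        except (requests.ConnectionError, requests.Timeout) as exc:
            log.warning("attempt %d transient error: %s", attempt, exc)
        time.sleep(2 ** attempt)            # 2s, 4s, 8s... between attempts
    raise RuntimeError(f"API call to {url} failed after {attempts} attempts")

# Example (hypothetical billing endpoint):
# call_api("https://billing.example.com/v1/subscriptions", {"account": "A-17"})
```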
ACTIVELY HIRING
posted 1 month ago
experience: 5 to 9 Yrs
location
Hyderabad, Telangana
skills
  • Python
  • AWS
  • RESTful APIs
  • PostgreSQL
  • Docker
  • Kubernetes
  • Unit Testing
  • Generative AI
  • Vector Databases
  • Agile Software Development
Job Description
As a Python, AWS & Generative AI Developer at CirrusLabs, you will join our agile product development team to build next-gen AI-driven applications. You will collaborate with cross-functional teams to deliver scalable APIs, integrate with modern databases, and contribute to Generative AI-powered solutions. Your responsibilities will include:
- Participating in Agile ceremonies, sprint planning, and iterative development cycles
- Refining user stories into technical tasks and delivering working solutions every sprint
- Designing and developing RESTful APIs using Python (FastAPI/Flask/Django)
- Implementing integration logic with external services and data pipelines
- Developing and optimizing data models using PostgreSQL and vector databases (e.g., Pinecone, Weaviate, FAISS)
- Managing schema changes, migrations, and efficient query design
- Integrating Generative AI models (e.g., OpenAI, HuggingFace) into applications where applicable
- Working with embedding models and LLMs to support AI-driven use cases
- Packaging applications into Docker containers and deploying them to Kubernetes clusters
- Implementing CI/CD practices to support automated testing and delivery
- Creating comprehensive unit and integration tests across backend layers (a pytest sketch follows this listing)
- Following clean code principles and participating in peer code reviews
- (Optional) Contributing to frontend development using Angular or React, especially for micro-frontend architecture
- Working closely with frontend developers to define and support REST API consumption
- Fostering a culture of autonomy, accountability, and knowledge-sharing
- Actively engaging with the Product Owner to align features with user needs
- Participating in sprint reviews to demo implemented features and gather feedback for improvements

Qualifications required for this role include:
- Strong experience in Python development with an emphasis on REST APIs
- Solid understanding of PostgreSQL and at least one vector database
- Familiarity with Generative AI/LLM integration (OpenAI APIs, embeddings, etc.)
- Experience deploying applications using Docker and Kubernetes
- Hands-on experience in agile software development
- Knowledge of writing unit tests and using tools like pytest or unittest

Exposure to frontend frameworks like Angular or React, an understanding of micro-frontend and component-based architecture, experience with cloud services such as AWS Lambda, S3, SageMaker, ECS/EKS, or familiarity with CI/CD tools like GitHub Actions, Jenkins, or GitLab CI would be considered preferred skills for this position.
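For the unit-testing expectation, here is a minimal pytest sketch exercising a FastAPI route in-process via TestClient; the route under test is a stand-in, not anything from the listing.

```python
# pip install fastapi httpx pytest
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/items/{item_id}")
def read_item(item_id: int):
    return {"item_id": item_id, "name": f"item-{item_id}"}

client = TestClient(app)  # runs the app in-process; no server needed

def test_read_item_returns_requested_id():
    resp = client.get("/items/7")
    assert resp.status_code == 200
    assert resp.json() == {"item_id": 7, "name": "item-7"}

def test_read_item_rejects_non_integer_id():
    # Path type hints double as validation: FastAPI returns 422 for "abc".
    assert client.get("/items/abc").status_code == 422
```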
ACTIVELY HIRING
posted 1 day ago
experience: 7 to 11 Yrs
location
Maharashtra, Pune
skills
  • Labour Management
  • Yard Management
  • Systems analysis
  • Data analysis
  • Solution architecting
  • Solution implementation
  • OO ABAP
  • CIF
  • Alerts
  • Requirement gathering
  • CDS
  • ODATA
  • SAP Extended Warehouse Management
  • SAP Mobile Technologies Implementations
  • SAP ERP modules such as SD/MM/WM
  • SAP EWM Outbound, Inbound, and Internal processes
  • Cross Docking
  • Interface development (RFC/ALE/IDoc/Web Services)
  • ABAP objects development
  • OSS notes
  • SAP ASAP methodology
  • Program documentation
  • Design and testing
  • Report, screen, and form design
  • Logical recommendations
  • Solution documentation
  • ITS technology
  • RF Screen
  • PPF enhancements
  • EEWB
  • Monitor Enhancement
  • Warehouse monitor
  • Solution architecting workshops
  • ABAP OOPs
  • EGF
  • DRF
  • MFS
  • EWM Label Zebra Printers
  • ABAP on HANA (AMDP)
Job Description
As a Manager EWM Consultant at EY, you will have the opportunity to utilize your 7-11 years of SAP experience, with a minimum of 3+ years focused on SAP Extended Warehouse Management. You will play a crucial role in implementing full life cycle projects in SAP EWM, including Requirement Gathering/Business Blueprinting. Your expertise in SAP Mobile Technologies, such as SAP Console/ITS Mobile/RFID, will be essential in designing Radio Frequency Framework solutions. Additionally, your strong knowledge of SAP ERP modules like SD/MM/WM will be beneficial in executing outbound, inbound, and internal processes in SAP EWM, including VAS, Quality Inspections, Wave Management, Physical Inventory, and Posting Changes.

Your responsibilities will include creating master data such as Packaging Specifications and Warehouse Organizational Structures, configuring and modifying the Post Processing Framework in SAP EWM, and setting up system integrations between SAP ERP and SAP EWM. You will also be involved in implementing Labour Management, Yard Management, and Cross Docking in EWM, as well as developing interfaces using RFC/ALE/IDoc/Web Services; a brief example of such an interface call follows this listing. Your role will require analysis, design, development, testing, and documentation of solutions, while adhering to an onsite-offshore delivery model and CMMI standards.

As part of the EY GDS-SAP EWM Technical Team, you will work on client problem-solving and provide end-to-end solutions, collaborating with high-quality teams globally. Your tasks will include ABAP objects development in SAP modules like EWM, application of OSS notes, and adherence to the SAP ASAP methodology. Furthermore, you will design, develop, and code complex programs for SAP EWM, conduct solution architecting workshops, and work effectively with project teams and users.

To qualify for this role, you should hold a Bachelor's or Master's degree, have 7-11 years of total SAP experience with at least 3 years as a Technical ABAP Consultant on SAP EWM projects, and hold a certification in SAP EWM. Strong communication, team leadership, problem-solving, and analytical skills are crucial for success. Practical knowledge of CIF, DRF, MFS, EWM label (Zebra) printers, and ABAP on HANA concepts would be beneficial. Joining EY as a Manager EWM Consultant offers the opportunity to work in a diverse, inclusive culture and be part of a global team dedicated to building a better working world.
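For a flavor of the RFC interface work mentioned above, the sketch below uses SAP's open-source pyrfc connector from Python; the connection parameters are placeholders, and STFC_CONNECTION is SAP's standard connectivity-test function module, not anything specific to this role.

```python
from pyrfc import Connection

# Placeholder connection details -- in practice these come from a secure
# configuration store, not hard-coded literals.
conn = Connection(
    ashost="sap-host.example.com",  # hypothetical application server
    sysnr="00",
    client="100",
    user="RFC_USER",
    passwd="secret",
)

# STFC_CONNECTION echoes the request text and returns a server response,
# which makes it the usual smoke test for RFC connectivity.
result = conn.call("STFC_CONNECTION", REQUTEXT="ping from Python")
print(result["ECHOTEXT"])  # the request text, echoed back
print(result["RESPTEXT"])  # server-side response string
conn.close()
```

A real EWM integration would target custom or standard function modules exposed by the warehouse system, with the same call pattern.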
ACTIVELY HIRING
posted 2 weeks ago
experience2 to 6 Yrs
location
All India, Vadodara
skills
  • Visualforce
  • SOQL
  • APIs
  • Git
  • Communication skills
  • Salesforce architecture
  • Lightning components
  • Apex development
  • Apex
  • Lightning Web Components (LWC)
  • Salesforce DX
  • CICD pipelines
  • AppExchange app packaging
  • Salesforce data model
  • Security model
  • Lightning framework
  • Analytical skills
  • Problem-solving skills
  • Collaboration abilities
Job Description
As a Salesforce Developer at KMK Consulting, you will play a crucial role in designing, developing, and deploying innovative Salesforce applications. Your responsibilities will include:
- Designing, developing, and deploying Salesforce applications for AppExchange.
- Working with Apex, Lightning Web Components (LWC), Visualforce, and SOQL to build scalable solutions.
- Packaging and managing Salesforce applications for AppExchange publication.
- Implementing and maintaining integrations with third-party applications using APIs (see the sketch after this listing).
- Performing unit testing, integration testing, and deployment using Salesforce DX and CI/CD pipelines.
- Collaborating with cross-functional teams to translate functional requirements into technical solutions within an Agile development methodology.
- Ensuring code quality, performance optimization, and adherence to Salesforce best practices.
- Staying updated with the latest Salesforce features, AppExchange guidelines, and development tools.

To excel in this role, you should possess the following qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 2-3 years of hands-on experience in Salesforce application development.
- Proficiency in Apex, LWC, Visualforce, SOQL, and Salesforce APIs.
- Experience with AppExchange app packaging and the security review process.
- Familiarity with Salesforce DX, Git, and CI/CD tools.
- Strong understanding of the Salesforce data model, security model, and Lightning framework.
- Knowledge of Salesforce AppExchange guidelines and compliance requirements.
- Salesforce Developer Certification (Platform Developer I or II) preferred.

In addition to technical skills, soft skills such as strong analytical abilities, excellent communication, collaboration, self-motivation, and detail orientation are essential for success in this role. Join KMK Consulting, a dynamic company specializing in Pharma Sales, Marketing & Real World Evidence, as we expand our expertise into Salesforce solutions and AppExchange product development. At KMK Consulting, we bring together a range of competencies in marketing science, market research, forecasting, and sales force effectiveness to provide fully integrated solutions that support our biopharma clients' commercial success.
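As an illustration of the third-party API integration noted above, here is a minimal sketch of running a SOQL query through the Salesforce REST API from Python; the org URL, API version, and access token are placeholders (a real integration would obtain the token through an OAuth flow).

```python
import requests

INSTANCE_URL = "https://yourorg.my.salesforce.com"  # placeholder org URL
ACCESS_TOKEN = "00D...session_token"                # placeholder token

# The standard REST query endpoint takes a SOQL statement in the q parameter.
resp = requests.get(
    f"{INSTANCE_URL}/services/data/v59.0/query",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"q": "SELECT Id, Name FROM Account LIMIT 5"},
    timeout=10,
)
resp.raise_for_status()

# Query results come back as a JSON object with a "records" array.
for record in resp.json()["records"]:
    print(record["Id"], record["Name"])
```

Higher-level clients such as the simple_salesforce library wrap these same endpoints if raw HTTP is not desired.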
ACTIVELY HIRING
posted 2 months ago
experience10 to 14 Yrs
location
All India
skills
  • Kubernetes API
  • Helm templating
  • Argo CD GitOps integration
  • Go/Python scripting
  • Containerd
Job Description
As a Senior Kubernetes Platform Engineer with 10+ years of infrastructure experience, your role will involve designing and implementing the Zero-Touch Build, Upgrade, and Certification pipeline for the on-premises GPU cloud platform. Your main focus will be on automating the Kubernetes layer and its dependencies using 100% GitOps workflows. You will collaborate with various teams to deliver a fully declarative, scalable, and reproducible infrastructure stack, from hardware to Kubernetes and platform services.

Key Responsibilities:
- Architect and implement GitOps-driven Kubernetes cluster lifecycle automation using tools like kubeadm, ClusterAPI, Helm, and Argo CD.
- Develop and manage declarative infrastructure components for GPU stack deployment (e.g., NVIDIA GPU Operator), container runtime configuration (Containerd), and networking layers (CNI plugins like Calico, Cilium, etc.).
- Lead automation efforts to enable zero-touch upgrades and certification pipelines for Kubernetes clusters and associated workloads.
- Maintain Git-backed sources of truth for all platform configurations and integrations.
- Standardize deployment practices across multi-cluster GPU environments, ensuring scalability, repeatability, and compliance.
- Drive observability, testing, and validation as part of the continuous delivery process (e.g., cluster conformance, GPU health checks); a sketch of such a check follows this listing.
- Collaborate with infrastructure, security, and SRE teams to ensure seamless handoffs between the lower layers (hardware/OS) and the Kubernetes platform.
- Mentor junior engineers and contribute to the platform automation roadmap.

Qualifications Required:
- 10+ years of hands-on experience in infrastructure engineering, with a strong focus on Kubernetes-based environments.
- Primary skills: Kubernetes API, Helm templating, Argo CD GitOps integration, Go/Python scripting, Containerd.
- Deep knowledge and hands-on experience with Kubernetes cluster management (kubeadm, ClusterAPI), Argo CD for GitOps-based delivery, Helm for application and cluster add-on packaging, and Containerd as a container runtime, including its integration with GPU workloads.
- Experience deploying and operating the NVIDIA GPU Operator or an equivalent in production environments.
- Solid understanding of CNI plugin ecosystems, network policies, and multi-tenant networking in Kubernetes.
- Strong GitOps mindset with experience managing infrastructure as code through Git-based workflows.
- Experience building Kubernetes clusters in on-prem environments (vs. managed cloud services).
- Proven ability to scale and manage multi-cluster, GPU-accelerated workloads with high availability and security.
- Solid scripting and automation skills (Bash, Python, or Go).
- Familiarity with Linux internals, systemd, and OS-level tuning for container workloads.

Additionally, bonus qualifications include experience with custom controllers, operators, or Kubernetes API extensions, contributions to Kubernetes or CNCF projects, and exposure to service meshes, ingress controllers, or workload identity providers.
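As a small illustration of the GPU health checks mentioned above, the sketch below reports node readiness and allocatable GPUs using the official Kubernetes Python client; the nvidia.com/gpu resource name is what the NVIDIA GPU Operator advertises, and kubeconfig-based access is an assumption of the example.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig; inside a pod, use
# config.load_incluster_config() instead.
config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    name = node.metadata.name
    allocatable = node.status.allocatable or {}
    # The GPU Operator exposes GPUs as the extended resource nvidia.com/gpu.
    gpus = allocatable.get("nvidia.com/gpu", "0")
    # Node conditions include a "Ready" entry whose status is True/False.
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"),
        "Unknown",
    )
    print(f"{name}: Ready={ready} allocatable GPUs={gpus}")
```

The same loop can be extended to assert against expected GPU counts and fail a certification pipeline when a node drifts from its declared configuration.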