
3 Anomaly Detection Jobs near Navi Mumbai

posted 2 months ago
Experience: 4 to 8 Yrs
Location: Navi Mumbai, Maharashtra
Skills:
  • Python
  • R
  • SQL
  • statistics
  • probability
  • hypothesis testing
  • experimental design
  • data wrangling
  • Tableau
  • Power BI
  • natural language processing
  • computer vision
  • deep learning
  • anomaly detection
  • Spark
  • Hadoop
  • Kafka
  • fintech
  • healthcare
  • leadership
  • scikit-learn
  • TensorFlow
  • PyTorch
  • XGBoost
  • ETL pipelines
  • cloud platforms
  • SageMaker
  • BigQuery
  • Databricks
  • MLOps practices
  • visualisation tools
  • Plotly
  • time series forecasting
  • big data technologies
  • ecommerce
  • mentorship
Job Description
As a Data Scientist at RBHU ETG Pvt. Ltd., you will play a crucial role in solving complex business problems through advanced analytics and machine learning techniques. Your responsibilities will include:
- Analyzing large and diverse datasets to derive actionable insights for strategic decision-making.
- Developing and implementing machine learning models such as classification, regression, and clustering with a focus on scalability and performance.
- Conducting feature engineering, data cleaning, preprocessing, and exploratory data analysis.
- Collaborating with cross-functional teams to understand business requirements and translate them into analytical and machine learning solutions.
- Building and maintaining data pipelines, ensuring data quality and consistency across different sources.
- Monitoring model performance, conducting retraining as necessary, performing A/B testing, and managing model versioning.
- Communicating findings effectively through dashboards, visualizations, and reports for both technical and non-technical stakeholders.
- Mentoring and guiding junior data scientists/analysts, reviewing code, and advocating for best practices.
- Keeping abreast of the latest trends, tools, and research in AI/ML to evaluate their relevance in our environment.
Qualifications we are looking for in a candidate include:
- 4-6 years of experience in roles such as Data Scientist, Machine Learning Engineer, or similar.
- Proficiency in Python (preferred) and/or R, along with experience in SQL.
- Hands-on expertise in ML frameworks like scikit-learn, TensorFlow, PyTorch, XGBoost, etc.
- Strong knowledge of statistics, probability, hypothesis testing, and experimental design.
- Skills in data engineering including ETL pipelines, data wrangling, and handling large-scale datasets.
- Familiarity with cloud platforms and services such as AWS, GCP, Azure, SageMaker, BigQuery, Databricks, etc.
- Experience in deploying and maintaining models in production following MLOps practices.
- Strong analytical, problem-solving, and communication skills, along with familiarity with visualization tools like Tableau, Power BI, Plotly, etc.
- Ability to thrive in a fast-paced, agile environment, being self-motivated and detail-oriented.
Additionally, the following qualifications are considered nice to have:
- Experience in natural language processing, computer vision, or deep learning.
- Knowledge of time series forecasting, anomaly detection, and big data technologies like Spark, Hadoop, Kafka, etc.
- Sector-specific analytics experience in fintech, e-commerce, healthcare, or other industries.
- Previous mentorship or small-team leadership experience.
At RBHU ETG Pvt. Ltd., you can expect:
- A collaborative and learning-oriented work environment where your ideas are valued.
- Exposure to challenging and high-impact projects in Mumbai and beyond.
- Opportunities to work across the full stack of data science and deployment.
- Professional growth and mentorship opportunities.
- Competitive salary and benefits.
(Note: Additional details about the company were not provided in the job description.)
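The anomaly-detection skill this listing asks for can be illustrated with a minimal z-score rule using only the standard library. This is a sketch of the general idea, not RBHU's actual pipeline; the readings and threshold are invented for illustration.

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Return indices of points whose absolute z-score exceeds the threshold."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Illustrative sensor readings with one obvious outlier at index 7.
readings = [10, 11, 9, 10, 12, 10, 11, 95, 10, 9]
print(zscore_anomalies(readings, threshold=2.5))
```

A production system would typically use a learned model (e.g., an isolation forest) and rolling statistics rather than a single global mean, but the detect-by-deviation idea is the same.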
ACTIVELY HIRING

posted 2 weeks ago

Lead Advanced Analytics

Reliance Industries Limited
Experience: 5 to 9 Yrs
Location: Navi Mumbai, All India
Skills:
  • Python
  • SQL
  • ML frameworks: scikit-learn
  • TensorFlow
  • PyTorch
  • Big-data technologies: Spark
  • Databricks
Job Description
As a candidate for the role of spearheading the development and deployment of advanced analytics and data-driven solutions across the CBG value chain, your responsibilities will include:
- Defining and owning the advanced analytics roadmap, aligning data initiatives with CBG business objectives such as yield maximization, cost reduction, and quality improvement.
- Identifying high-impact use cases across feedstock selection, digester performance, purification, bottling, logistics, and sales forecasting.
You will also be leading the design, development, and validation of statistical and machine-learning models including predictive maintenance, anomaly detection, yield forecasting, and optimization. Additionally, you will oversee end-to-end analytics workflows such as data ingestion/cleansing, feature engineering, model training, evaluation, and deployment.
In this role, you will be required to evaluate, select, and manage analytics platforms and toolchains. You will drive the adoption of automated ML pipelines, MLOps best practices, and scalable data architectures. Collaboration with plant operations, supply chain, and commercial teams will be essential to translate analytical insights into actionable dashboards and reports using tools like Power BI, Tableau, or Grafana. Presenting findings and recommendations to senior leadership in clear business narratives will also be part of your responsibilities.
You will mentor and lead a small team of data scientists, analysts, and engineers while fostering a culture of experimentation, continuous learning, and cross-functional collaboration. Establishing data governance standards, model validation protocols, and ensuring compliance with data security, privacy, and regulatory requirements will also fall under your purview.
Qualifications required for this position include a B.E./B.Tech degree and strong proficiency in Python, SQL, ML frameworks (scikit-learn, TensorFlow, PyTorch), and big-data technologies (Spark, Databricks, etc.). Hands-on experience with predictive maintenance, process optimization, time-series forecasting, and anomaly detection is also a key skill set expected from you.
ACTIVELY HIRING
posted 3 weeks ago
Experience: 5 to 9 Yrs
Location: Navi Mumbai, Maharashtra
Skills:
  • Python
  • SQL
  • ML frameworks: scikit-learn
  • TensorFlow
  • PyTorch
  • Big-data technologies: Spark
  • Databricks
Job Description
As the Lead for Advanced Analytics in the CBG sector, you will be responsible for spearheading the development and deployment of data-driven solutions to optimize processes, provide predictive insights, and drive innovation across various operations. Your key responsibilities will include:
- Defining and owning the advanced analytics roadmap, aligning data initiatives with CBG business objectives such as yield maximization, cost reduction, and quality improvement.
- Identifying high-impact use cases in areas like feedstock selection, digester performance, purification, bottling, logistics, and sales forecasting.
You will also lead the design, development, and validation of statistical and machine-learning models for predictive maintenance, anomaly detection, yield forecasting, and optimization. This involves overseeing end-to-end analytics workflows including data ingestion, cleansing, feature engineering, model training, evaluation, and deployment. Additionally, you will be tasked with evaluating, selecting, and managing analytics platforms and toolchains, as well as driving the adoption of automated ML pipelines, MLOps best practices, and scalable data architectures.
Collaborating with plant operations, supply chain, and commercial teams, you will translate analytical insights into actionable dashboards and reports using tools like Power BI, Tableau, or Grafana. Presenting findings and recommendations to senior leadership in clear business narratives will be a key aspect of your role. Furthermore, you will mentor and lead a small team of data scientists, analysts, and engineers, fostering a culture of experimentation, continuous learning, and cross-functional collaboration. Establishing data governance standards, model validation protocols, and ensuring compliance with data security, privacy, and regulatory requirements will also fall under your purview.
Qualifications required for this role include:
- B.E./B.Tech degree
- Strong proficiency in Python, SQL, ML frameworks (scikit-learn, TensorFlow, PyTorch), and big-data technologies (Spark, Databricks, etc.)
- Hands-on experience with predictive maintenance, process optimization, time-series forecasting, and anomaly detection.
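The time-series forecasting and anomaly-detection skills this role asks for can be illustrated with a minimal moving-average baseline. The yield figures and tolerance below are invented; a production system would use proper forecasting models (ARIMA, exponential smoothing, or learned models) rather than this toy baseline.

```python
def moving_average_forecast(series, window=3):
    """Forecast the next value as the mean of the last `window` observations."""
    recent = series[-window:]
    return sum(recent) / len(recent)

def flag_if_anomalous(actual, forecast, tolerance=0.2):
    """Flag a reading deviating more than `tolerance` (as a fraction) from forecast."""
    return abs(actual - forecast) > tolerance * forecast

# Illustrative daily yield figures for a plant.
daily_yield = [100, 102, 98, 101, 99, 103]
forecast = moving_average_forecast(daily_yield)
print(forecast)                       # mean of the last 3 observations
print(flag_if_anomalous(70, forecast))  # a 70 reading deviates far from forecast
```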
ACTIVELY HIRING

posted 2 months ago

Security Analyst - Threat Hunting

SHI Solutions India Pvt. Ltd.
Experience: 3 to 7 Yrs
Location: Maharashtra
Skills:
  • Threat Hunting
  • Install, configure, and manage FleetDM and OSQuery
  • Create custom queries
  • Deploy alerts
  • Deploy rules
  • Analyze endpoint telemetry data
  • Proactively hunt for threats
  • Investigate malware
  • Develop detection rules
  • Operating systems knowledge
  • Networking knowledge
  • Query language knowledge
Job Description
Role Overview: You will be responsible for installing, configuring, and managing FleetDM and OSQuery across the bank's critical endpoints in Mumbai, ensuring continuous monitoring of core banking systems and financial infrastructure. Additionally, you will create and deploy custom queries, alerts, and rules to detect unauthorized activities, internal threats, and system anomalies. Your role will involve leveraging FleetDM and OSQuery to gather and analyze endpoint telemetry data for signs of malicious activity targeting banking applications and infrastructure. Proactively hunting for advanced persistent threats (APTs), malware, and other security risks across Windows and Linux environments will be a key part of your responsibilities. You will also utilize data from FleetDM and OSQuery to identify potential risks, detect fraudulent activities across financial systems and customer-facing services, investigate malware impact on financial services, and develop detection rules to mitigate future incidents.
Key Responsibilities:
- Install, configure, and manage FleetDM and OSQuery across critical endpoints
- Create and deploy custom queries, alerts, and rules for detecting unauthorized activities and threats
- Leverage FleetDM and OSQuery to analyze endpoint telemetry data for malicious activity
- Proactively hunt for APTs, malware, and security risks across Windows and Linux environments
- Identify risks, detect fraudulent activities, investigate malware impact, and develop detection rules
Qualifications Required:
- Minimum 3 years of relevant work experience
- Knowledge of operating systems, networking, and query languages
(Note: Additional details of the company were not provided in the job description.)
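The "custom queries and rules" responsibility can be illustrated with a toy detection rule applied to endpoint telemetry records. The hostnames, field names, and events below are invented for illustration and do not reflect FleetDM's actual schema; the rule itself (a shell spawned by an Office application) is a common phishing indicator.

```python
# Hypothetical telemetry records; field names are illustrative, not FleetDM's schema.
events = [
    {"host": "atm-gw-01", "process": "powershell.exe", "parent": "winword.exe"},
    {"host": "core-db-02", "process": "sqlservr.exe", "parent": "services.exe"},
    {"host": "teller-07", "process": "cmd.exe", "parent": "excel.exe"},
]

OFFICE_PARENTS = {"winword.exe", "excel.exe", "outlook.exe"}

def shell_spawned_by_office(event):
    """Detection rule: a shell launched by an Office app often indicates phishing."""
    return (event["process"] in {"cmd.exe", "powershell.exe"}
            and event["parent"] in OFFICE_PARENTS)

hits = [e["host"] for e in events if shell_spawned_by_office(e)]
print(hits)
```

In practice this logic would live in an osquery SQL query scheduled via FleetDM rather than in ad hoc Python, but the rule structure (filter telemetry by a suspicious parent/child relationship) carries over directly.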
ACTIVELY HIRING
posted 2 weeks ago
Experience: 4 to 8 Yrs
Location: Maharashtra
Skills:
  • SIEM
  • TCP/IP
  • IP Routing
  • VPN
  • SSL
  • Security Tools
  • IDS/IPS
  • Antivirus
  • Proxy
  • URL Filtering
  • DLP
  • Packet Analysis
  • Exploits
  • Vulnerabilities
  • Incident Response
  • Security Events Analysis
  • Root-Cause Analysis
  • Cybersecurity
  • Encryption Methodology
  • Web Application Firewalls
  • Vulnerability Scanner
Job Description
You will be joining a high-performance team in Mumbai as a Security Analyst. Your role will involve monitoring security events and alerts, evaluating their severity, and providing incident response. This is a 24x7 role with the option to work fully from the office.
**Responsibilities:**
- Utilize SIEM technologies and native tools for 24x7 monitoring of security events.
- Manage inbound requests through the ticketing system and telephone calls, providing security notifications via incident tickets, emails, and telephone calls.
- Analyze logs from network devices like firewalls, content filtering, and intrusion detection capabilities.
- Monitor security events using SIEM and integrate results to proactively protect the enterprise.
- Analyze security events; identify threats, anomalies, and infections; document findings; and provide recommendations within the incident management system.
- Perform cybersecurity root-cause analysis for tickets that do not meet Acceptable Quality Levels.
**Requirements:**
- Bachelor's degree in Computer Science, Systems Engineering, Cybersecurity, Information Technology, or a related field.
- Minimum 4 years of monitoring experience in a Cyber Security Operations Center.
- Technical expertise in troubleshooting Microsoft products and operating systems, with knowledge of macOS and Linux.
- Understanding of basic network services, TCP/IP, IP routing, attacks, exploits, and vulnerabilities.
- Experience with VPN, SSL, encryption technologies, and security tools like SIEM, IDS/IPS, Antivirus, etc.
- Ability to manage confidential information.
**Desired Certifications:**
- CompTIA Security+
- Certified Ethical Hacker (CEH)
- GIAC Certified Incident Handler (GCIH)
- Certified SOC Analyst (CSA)
- Microsoft Certified: Security Operations Analyst Associate
Kroll is a global valuation and corporate finance advisor that provides expertise in various areas including complex valuation, disputes, M&A, restructuring, and compliance consulting. The organization values diverse perspectives and encourages a collaborative work environment that empowers individuals to excel. To be considered for a position, you must formally apply via careers.kroll.com.
ACTIVELY HIRING
posted 2 months ago
Experience: 1 to 8 Yrs
Location: Pune, Maharashtra
Skills:
  • AML
  • Document Review
  • Transaction Monitoring
Job Description
As an AML - Transaction Monitoring Associate/Senior Associate/Team Lead in Chennai, your role involves:
- Reviewing assigned alerts using the AML alert management system.
- Documenting findings from the review process.
- Reviewing manual referrals from various areas of the bank.
- Analyzing Low, Medium, and High Risk accounts.
- Conducting enhanced due diligence research on individuals, institutions, and trusts using tools like LexisNexis.
- Interacting with Bank management regarding suspicious transactions.
- Preparing and reviewing Loan accounts, RDC, Wire Transfers, and Monetary Instrument reports.
- Utilizing transactional and customer records to identify suspicious activities.
- Performing detailed analyses to detect patterns, trends, anomalies, and schemes in transactions.
- Maintaining strong investigative skills and extensive banking and Compliance knowledge.
- Identifying significant cases, red flags, and patterns associated with money laundering.
Qualifications required for this role:
- Minimum 1-8 years of experience in AML - Transaction Monitoring.
- AML level 1 detection experience.
- Bachelor's Degree.
- Flexibility for night shifts.
Key Skills:
- AML
- Document Review
- Transaction Monitoring
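One classic pattern this kind of transaction-monitoring work targets is structuring: splitting deposits to keep each one under a reporting threshold. A toy sketch of that check follows; the threshold, customer IDs, and amounts are invented, and real monitoring systems apply this over rolling time windows with many more signals.

```python
from collections import defaultdict

THRESHOLD = 10_000  # illustrative reporting threshold, not a jurisdiction-specific value

def flag_structuring(transactions):
    """Flag customers whose individually sub-threshold deposits sum past the threshold."""
    totals = defaultdict(float)
    for customer, amount in transactions:
        if amount < THRESHOLD:  # each deposit individually stays under the line
            totals[customer] += amount
    return sorted(c for c, total in totals.items() if total >= THRESHOLD)

txns = [("A", 9500), ("A", 9800), ("B", 4000), ("C", 12000), ("B", 3000)]
print(flag_structuring(txns))  # customer A's two sub-threshold deposits sum to 19,300
```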
ACTIVELY HIRING
posted 2 months ago
Experience: 3 to 7 Yrs
Location: Pune, Maharashtra
Skills:
  • Splunk
  • Duo
  • ServiceNow
  • JIRA
  • Cisco AMP
  • CASB
  • CrowdStrike
  • ExtraHop
Job Description
You will be joining the Security Operations Center team in Pune as a SOC Analyst Level 2. Your primary responsibility is to protect the organization's digital assets by monitoring, analyzing, and responding to security incidents. You will utilize tools like Splunk to defend information assets, handle complex security incidents, conduct detailed investigations, and mentor L1 analysts. Your role includes using advanced security tools to detect, analyze, and respond to sophisticated cyber threats, contributing to improving SOC processes.
- Investigate and resolve escalated security incidents
- Perform in-depth root cause analysis
- Conduct proactive threat hunting activities
- Utilize advanced endpoint protection and threat analysis tools like Cisco AMP and CrowdStrike
- Monitor and analyze network traffic to detect anomalies and potential intrusions
- Perform detailed log analysis and event correlation using Splunk
- Recommend and configure SIEM rules and alerts to enhance detection capabilities
- Monitor and secure cloud services and applications using CASB solutions
- Collaborate with cross-functional teams to coordinate incident response efforts
- Document findings and actions
- Mentor L1 analysts
- Stay updated on emerging cybersecurity threats, trends, and technologies
You should bring proficiency with tools like Cisco AMP, Splunk, Duo, CASB, CrowdStrike, ExtraHop, ServiceNow, and JIRA; strong knowledge of network and endpoint security principles; and hands-on experience with incident response, threat hunting, and log analysis.
Persistent Ltd. offers a competitive salary and benefits package, and a culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications. You will have the opportunity to work with cutting-edge technologies, engage in employee initiatives, receive annual health check-ups, and have insurance coverage for yourself, your spouse, two children, and parents. The company fosters a diverse and inclusive environment, providing hybrid work options, flexible hours, and accessible facilities to support employees with disabilities. If you have specific requirements, please inform us during the application process or at any time during your employment. Persistent Ltd. is committed to creating an inclusive environment where all employees can thrive, accelerate growth both professionally and personally, impact the world positively, enjoy collaborative innovation, and unlock global opportunities to work and learn with the industry's best. Join Persistent, an Equal Opportunity Employer that prohibits discrimination and harassment.
ACTIVELY HIRING
posted 3 weeks ago
Experience: 4 to 8 Yrs
Location: Pune, Maharashtra
Skills:
  • regression
  • classification
  • anomaly detection
  • Python
  • time-series modeling
  • scikit-learn
  • TensorFlow
  • PyTorch
  • MLOps
  • Azure ML
  • Databricks
  • manufacturing KPIs
Job Description
As an Automation-Tools/Platform-Automated Artificial Intelligence Tools Specialist at NTT DATA in Bangalore, Karnataka, India, you will play a crucial role in leveraging machine learning models to predict asset health, process anomalies, and optimize operations. Your responsibilities will include working on sensor data pre-processing, model training, and inference deployment, as well as collaborating with simulation engineers for closed-loop systems.
Key Responsibilities:
- Build machine learning models to predict asset health, process anomalies, and optimize operations.
- Work on sensor data pre-processing, model training, and inference deployment.
- Collaborate with simulation engineers for closed-loop systems.
Skills Required:
- Experience in time-series modeling, regression/classification, and anomaly detection.
- Familiarity with Python, scikit-learn, and TensorFlow/PyTorch.
- Experience with MLOps on Azure ML, Databricks, or similar.
- Understanding of manufacturing KPIs (e.g., OEE, MTBF, cycle time).
About NTT DATA:
NTT DATA is a $30 billion trusted global innovator of business and technology services, serving 75% of the Fortune Global 100. Committed to helping clients innovate, optimize, and transform for long-term success, NTT DATA has diverse experts in more than 50 countries and a robust partner ecosystem. Their services include business and technology consulting, data and artificial intelligence, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is a leading provider of digital and AI infrastructure in the world and is part of the NTT Group, which invests heavily in R&D to support organizations and society in confidently transitioning into the digital future. Visit them at us.nttdata.com.
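Of the manufacturing KPIs the listing names (OEE, MTBF, cycle time), OEE has a simple closed form: the product of availability, performance, and quality. A minimal sketch with illustrative shift numbers:

```python
def oee(availability, performance, quality):
    """Overall Equipment Effectiveness: the product of its three factor rates (0-1)."""
    return availability * performance * quality

# Illustrative shift: 90% uptime, running at 95% of ideal rate, 99% good parts.
print(round(oee(0.90, 0.95, 0.99), 3))  # ~0.846, i.e. about 84.6% effective
```

Each factor isolates a different loss category (downtime, slow cycles, defects), which is why plants track the three components as well as the product.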
ACTIVELY HIRING
posted 2 months ago
Experience: 4 to 8 Yrs
Location: Maharashtra
Skills:
  • SQL
  • data visualization
  • anomaly detection
  • Python
  • Linux
  • Big Data
  • Hadoop
  • HBase
  • Spark
  • data structures
  • algorithms
  • NoSQL databases
  • deep learning frameworks
  • cloud technology
  • machine learning libraries
  • problem-solving
Job Description
Role Overview: You will be responsible for developing models to achieve business objectives and tracking their progress with metrics. Your role will involve developing scalable infrastructure for automating training and deployment of ML models, managing resources efficiently to meet deadlines, and creating data visualizations for better data comprehension. Additionally, you will brainstorm and design various POCs using ML/DL/NLP solutions, work with and mentor fellow Data and Software Engineers, and communicate effectively to understand product challenges.
Key Responsibilities:
- Understand business objectives and develop models to achieve them
- Develop scalable infrastructure for automating training and deployment of ML models
- Manage resources efficiently to meet deadlines
- Create data visualizations for better data comprehension
- Brainstorm and design various POCs using ML/DL/NLP solutions
- Work with and mentor fellow Data and Software Engineers
- Communicate effectively to understand product challenges
- Build core Artificial Intelligence and AI services such as Decision Support, Vision, Speech, Text, NLP, NLU, and others
- Experiment with ML models in Python using machine learning libraries, Big Data, Hadoop, HBase, Spark, etc.
Qualifications Required:
- Proficiency in SQL and any NoSQL database like MongoDB or ElasticSearch
- Experience with data visualization tools like Kibana
- Expertise in visualizing big datasets and monitoring data
- Familiarity with implementing anomaly detection
- Proficiency in deep learning frameworks
- Proficiency in Python and basic libraries for machine learning
- Familiarity with Linux
- Ability to select hardware to run an ML model with the required latency
- Knowledge of cloud technology like AWS, Google Cloud, or similar
- Knowledge of Python using machine learning libraries (PyTorch, TensorFlow), Big Data, Hadoop, HBase, Spark, etc.
- Deep understanding of data structures and algorithms, and excellent problem-solving skills
ACTIVELY HIRING
posted 3 days ago

Business Intelligence

Channel Factory
Experience: 5 to 9 Yrs
Location: Maharashtra
Skills:
  • Tableau
  • SQL
  • Python
  • Business intelligence
  • Power BI
Job Description
As a Business Intelligence Analyst, you will be responsible for developing predictive models, analyzing complex datasets, and delivering actionable insights to support strategic decision-making. Your role will involve focusing on data validation, mining, and visualization to drive business growth and enhance reporting accuracy across the organization.
Key Responsibilities:
- Validate and review data from various business functions to ensure accuracy and consistency.
- Perform data mining, profiling, and anomaly detection across multiple datasets.
- Build predictive models and generate insights aligned with business needs.
- Develop BI dashboards, reports, and visualizations for stakeholders and management.
- Analyze customer behavior and key business metrics to support strategic initiatives.
- Establish data collection, governance, and analysis procedures.
Qualifications:
- Bachelor's degree in Data Science, Analytics, Computer Science, or a related field.
- 4-6 years of experience in Business Intelligence, Data Analytics, or Predictive Modeling.
- Strong ability to communicate complex insights in a simple, clear way.
Must-have skills:
- Power BI
- Tableau
- SQL
- Python
- Business intelligence
ACTIVELY HIRING
posted 7 days ago
Experience: 5 to 9 Yrs
Location: Pune, Maharashtra
Skills:
  • Snowflake
  • Python
  • dbt
  • GitLab
  • Jenkins
  • Kafka
  • Power BI
  • Spring
  • SQL
  • Snowpark
  • PySpark
  • Spark Structured Streaming
  • Java JDK
  • Spring Boot
Job Description
As a Data Engineer II at Ethoca, a Mastercard Company, located in Pune, India, you will play a crucial role in driving data enablement and exploring big data solutions within the technology landscape. Your expertise in cutting-edge software development, full stack development, cloud technologies, and data lakes will be key in working with massive data volumes. Here's an overview of your role:
Key Responsibilities:
- Design, develop, and optimize batch and real-time data pipelines using Snowflake, Snowpark, Python, and PySpark.
- Build data transformation workflows using dbt, focusing on Test-Driven Development (TDD) and modular design.
- Implement and manage CI/CD pipelines using GitLab and Jenkins for automated testing, deployment, and monitoring of data workflows.
- Deploy and manage Snowflake objects using Schema Change to ensure controlled, auditable, and repeatable releases across environments.
- Administer and optimize the Snowflake platform, including performance tuning, access management, cost control, and platform scalability.
- Drive DataOps practices by integrating testing, monitoring, versioning, and collaboration throughout the data pipeline lifecycle.
- Develop scalable and reusable data models for business analytics and dashboarding in Power BI.
- Support real-time data streaming pipelines using technologies like Kafka and Spark Structured Streaming.
- Establish data observability practices to monitor data quality, freshness, lineage, and anomaly detection.
- Plan and execute deployments, migrations, and upgrades across data platforms and pipelines with minimal service impact.
- Collaborate with stakeholders to understand data requirements and deliver reliable data solutions.
- Document pipeline architecture, processes, and standards to ensure consistency and transparency within the team.
- Utilize problem-solving and analytical skills to troubleshoot complex data and system issues.
- Communicate effectively with technical and non-technical teams.
Required Qualifications:
- Bachelor's degree in Computer Science/Engineering or a related field.
- 4-6 years of professional experience in software development/data engineering.
- Deep hands-on experience with Snowflake/Databricks, Snowpark, Python, and PySpark.
- Familiarity with Schema Change, dbt, Java JDK 8, and the Spring & Spring Boot frameworks.
- Proficiency in CI/CD tooling (GitLab, Jenkins), cloud-based databases (AWS, Azure, GCP), Power BI, and SQL development.
- Experience in real-time data processing, building data observability tools, and executing deployments with minimal disruption.
- Strong communication, collaboration, and analytical skills.
By joining Ethoca, a Mastercard Company, you will be part of a team committed to building a sustainable world and unlocking priceless possibilities for all. Your contributions as a Data Engineer II will be instrumental in shaping the future of the high-growth fintech marketplace.
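The data-observability responsibility (monitoring quality, freshness, lineage) can be illustrated with a minimal freshness check. The table names, timestamps, and staleness window below are invented; real observability tooling would read load timestamps from pipeline metadata rather than a hard-coded dict.

```python
from datetime import datetime, timedelta, timezone

def freshness_alerts(tables, max_age=timedelta(hours=6), now=None):
    """Return names of tables whose latest load is older than `max_age`."""
    now = now or datetime.now(timezone.utc)
    return [name for name, loaded_at in tables.items() if now - loaded_at > max_age]

# Illustrative last-load timestamps, checked against a fixed "now" for reproducibility.
now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
tables = {
    "payments": datetime(2024, 1, 1, 11, 30, tzinfo=timezone.utc),      # 30 min old
    "chargebacks": datetime(2024, 1, 1, 2, 0, tzinfo=timezone.utc),     # 10 h old
}
print(freshness_alerts(tables, now=now))
```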
ACTIVELY HIRING
posted 2 months ago

SAS Visual Investigator (VI) Developer

Lericon Informatics Pvt. Ltd.
Experience: 3 to 7 Yrs
Location: Maharashtra
Skills:
  • fraud detection
  • risk management
  • compliance
  • anomaly detection
  • data analysis
  • dashboards
  • data quality
  • integration
  • SAS Visual Investigator
  • visualizations
Job Description
Role Overview: You will utilize your expertise in SAS Visual Investigator (SAS VI) to enhance fraud detection and risk management initiatives. Your responsibilities will include analyzing complex data, identifying anomalies, investigating suspicious activities, and developing visualizations and dashboards to present insights to stakeholders. Your contributions will play a crucial role in identifying patterns, generating alerts for potential fraud risks, and continuously improving fraud prevention strategies.
Key Responsibilities:
- Design and implement solutions in SAS VI for fraud detection, risk management, compliance, and anomaly detection.
- Investigate suspicious activities by analyzing large datasets to identify patterns and potential fraud risks.
- Develop and maintain dashboards and visualizations to effectively communicate insights and alerts to stakeholders.
- Utilize SAS VI to detect and alert on fraudulent activities, ensuring compliance and timely intervention.
- Collaborate with teams to seamlessly integrate SAS VI with other systems for comprehensive fraud management.
- Continuously assess and enhance fraud detection processes and system performance.
- Ensure data quality and integrity through rigorous validation and troubleshooting in SAS VI.
- Stay informed about the latest trends and best practices in fraud detection, compliance, and anomaly detection.
Qualifications:
- Proficiency in using SAS VI for fraud detection, risk management, and anomaly detection.
- Experience in investigating suspicious activities, identifying patterns, and generating alerts.
- Ability to develop dashboards and visualizations to effectively present findings.
- Knowledge of integrating SAS VI with other systems for fraud management.
- Demonstrated capability in ensuring data quality and improving fraud detection processes.
- Bachelor's degree in Computer Science, Data Analytics, Statistics, or a related field.
ACTIVELY HIRING
posted 2 weeks ago
Experience: 15 to 19 Yrs
Location: Maharashtra
Skills:
  • Java
  • Python
  • C++
  • Microservices
  • Distributed Systems
  • Treasury
  • ALM
  • Risk Management
  • FX
  • Stakeholder Management
  • Buyside
  • Sellside
  • GenAI tools
  • ML/AI
Job Description
As a Vice President (VP) in Mumbai, you will play a crucial role as an Engineering Manager with expertise in financial domains such as Treasury, Sell-side, or Buy-side. Your responsibilities will involve architecting and delivering cutting-edge Treasury & Markets platforms with a focus on high availability, resilience, and scalability.

**Key Responsibilities:**
- Lead and mentor a team of 10-20 engineers specializing in backend, frontend, and data engineering.
- Stay actively involved in architecture, coding, and design reviews, emphasizing engineering excellence in clean code, CI/CD pipelines, observability, performance tuning, VAPT, and cloud-native best practices.
- Translate complex business problems in Treasury & Markets into end-to-end technology solutions, focusing on funding, risk, liquidity, Fixed Income, MM/FX, and ALM.
- Drive modernization initiatives including migration from monolith to microservices, cloud adoption (AWS/GCP/Azure), containerization (Kubernetes, Docker), and streaming (Kafka/Pulsar).
- Utilize AI tools for rapid prototyping, code generation, and test automation, integrating ML/AI in cashflow forecasting, liquidity analytics, anomaly detection, and risk monitoring.
- Foster an AI-first culture to enhance developer productivity and accelerate delivery.

**Key Qualifications:**
- 15+ years of experience in software engineering, including 5+ years managing high-performing technology teams.
- Strong coding skills in Java, Python, and C++, with hands-on experience in microservices and distributed systems.
- Practical exposure to modernization projects, cloud-native stacks, and building scalable APIs.
- In-depth knowledge of Treasury, ALM, Risk, MM, FX, or Buy-side/Sell-side trading workflows.
- Familiarity with GenAI-assisted coding and ML-driven solutions.
- Proven track record of delivering mission-critical, low-latency, high-resilience platforms in the BFSI sector.
- Excellent stakeholder management skills, collaborating closely with product owners, risk, operations, and global engineering teams.

Please note that the above summary is tailored to the details provided in the job description.
posted 2 months ago
experience: 3 to 7 Yrs
location: Maharashtra
skills:
  • Cisco IDS/IPS
Job Description
Role Overview:
As an Intrusion Detection and Prevention Systems Engineer at Wipro Limited, your main responsibility will be to deploy, configure, and maintain Cisco Intrusion Detection and Prevention Systems (IDS/IPS) across enterprise environments. You will be analyzing network traffic patterns to detect anomalies and potential threats using Cisco IPS sensors and management tools. Additionally, you will perform initial setup, integration, and tuning of Cisco IPS sensors in alignment with customer security policies.

Key Responsibilities:
- Deploy, configure, and maintain Cisco Intrusion Detection and Prevention Systems (IDS/IPS) across enterprise environments
- Analyze network traffic patterns to detect anomalies and potential threats using Cisco IPS sensors and management tools
- Perform initial setup, integration, and tuning of Cisco IPS sensors in alignment with customer security policies
- Apply and manage Cisco IPS security policies, including signature updates and response configuration
- Collaborate with SOC teams to investigate and respond to security incidents
- Document configurations, procedures, and incident reports for audit and compliance purposes
- Participate in customer meetings and provide technical insights during security reviews
- Strong understanding of intrusion detection/prevention concepts, traffic analysis, and evasion techniques
- Experience with Cisco IPS software, hardware sensors, and Cisco Security Manager
- Familiarity with IPS signature tuning, policy configuration, and sensor integration
- Ability to troubleshoot IPS-related issues and coordinate with cross-functional teams
- Experience with other security platforms (e.g., Palo Alto, Checkpoint) is a plus

Qualification Required:
- Mandatory Skills: Cisco IDS/IPS
- Experience: 3-5 Years

No additional details about the company are present in the job description.
posted 2 months ago
experience: 3 to 7 Yrs
location: Maharashtra
skills:
  • data analytics
  • model development
  • validation
  • statistical modeling
  • mathematical modeling
  • compliance
  • data analysis
  • anomaly detection
  • machine learning
  • R
  • Python
  • Java
  • C
  • SQL
  • writing skills
  • global financial industry
  • operational risks
  • data assembling
  • computational statistics
Job Description
As an expert in data analytics, with a keen eye for detail and organizational skills, your role at UBS will involve:

- Reviewing and challenging models used in operational risk management
- Assessing the conceptual soundness and appropriateness of different models and conducting related outcome, impact, and benchmark analyses
- Running analyses on implementations to assess correctness and stability
- Carrying out and documenting independent model validation in alignment with regulatory requirements and internal standards
- Interacting and discussing with model users, developers, senior owners, and governance bodies
- Supporting regulatory exercises

You will be a part of the Model Risk Management & Control function within the Group Chief Risk Officer organization at UBS. The team's focus is on understanding and assessing risks associated with model usage across the bank, ranging from operational risk capital determination to monitoring risks like money laundering and market manipulations.

To excel in this role, you should possess:

- An MSc or PhD degree in a quantitative field such as mathematics, physics, statistics, engineering, economics, or computer science
- Experience in model development or validation of statistical/mathematical models
- Familiarity with the global financial industry, compliance, and operational risks
- Expertise in data assembling and analysis, computational statistics, anomaly detection, or machine learning, with relevant programming skills in R, Python, Java, C++, or SQL
- Strong writing skills and a structured working style

UBS is the world's largest and only truly global wealth manager, operating through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management, and the Investment Bank. With a presence in over 50 countries, UBS embraces flexible ways of working, offering part-time, job-sharing, and hybrid working arrangements. The company's purpose-led culture and global infrastructure facilitate collaboration and agile working to meet business needs. UBS is an Equal Opportunity Employer, valuing and empowering individuals with diverse cultures, perspectives, skills, and experiences within its workforce.
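Outcome and benchmark analyses of the kind this role describes often start from simple distributional-shift statistics. As an illustrative aside (not part of the listing), here is a minimal stdlib-Python sketch of the Population Stability Index, a metric commonly used when validating and monitoring statistical models; the bin counts below are invented:

```python
import math

def psi(expected_counts, actual_counts, eps=1e-6):
    """Population Stability Index between two binned score distributions.

    expected_counts: bin counts from the development (baseline) sample
    actual_counts:   bin counts from the current (monitored) sample
    """
    e_total = sum(expected_counts)
    a_total = sum(actual_counts)
    total = 0.0
    for e, a in zip(expected_counts, actual_counts):
        e_pct = max(e / e_total, eps)  # floor at eps to avoid log(0) on empty bins
        a_pct = max(a / a_total, eps)
        total += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return total

# Invented example: a model's score histogram at development time vs. today.
baseline = [100, 200, 400, 200, 100]
current = [80, 180, 380, 240, 120]
print(round(psi(baseline, current), 4))
```

PSI near zero indicates the monitored population still resembles the development sample; common rules of thumb treat values above 0.1 as moderate and above 0.25 as significant drift.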
posted 2 months ago
experience: 3 to 7 Yrs
location: Nagpur, Maharashtra
skills:
  • data processing
  • Splunk
  • scripting
  • automation
  • Python
  • Bash
  • Ansible
  • monitoring
  • anomaly detection
  • analytical skills
  • communication skills
  • AI/ML frameworks
  • TensorFlow
  • PyTorch
  • scikit-learn
  • analytics tools
  • Prometheus
  • Grafana
  • ELK stack
  • cloud environments
  • infrastructure automation
  • self-healing techniques
  • problem-solving skills
  • strategic planning skills
  • collaboration skills
Job Description
Role Overview:
As a Machine Learning Engineer at the company, your primary role will be to apply machine learning algorithms to existing operational data (logs, metrics, events) to predict system failures and proactively address potential incidents. You will also be responsible for implementing automation for routine DevOps practices including automated scaling, resource optimization, and controlled restarts. Additionally, you will develop and maintain self-healing systems to reduce manual intervention and enhance system reliability. Your key responsibilities will include building anomaly detection models to quickly identify and address unusual operational patterns, collaborating closely with SREs, developers, and infrastructure teams to continuously enhance operational stability and performance, and providing insights and improvements through visualizations and reports leveraging AI-driven analytics. You will also be tasked with creating a phased roadmap to incrementally enhance operational capabilities and align with strategic business goals.

Key Responsibilities:
- Apply machine learning algorithms to predict system failures and address potential incidents
- Implement automation for routine DevOps practices including scaling and optimization
- Develop and maintain self-healing systems for enhanced reliability
- Build anomaly detection models to identify unusual operational patterns
- Collaborate with cross-functional teams to enhance operational stability and performance
- Provide insights and improvements through visualizations and reports
- Create a phased roadmap to enhance operational capabilities

Qualifications Required:
- Strong experience with AI/ML frameworks and tools (e.g., TensorFlow, PyTorch, scikit-learn)
- Proficiency in data processing and analytics tools (e.g., Splunk, Prometheus, Grafana, ELK stack)
- Solid background in scripting and automation (Python, Bash, Ansible, etc.)
- Experience with cloud environments and infrastructure automation
- Proven track record in implementing monitoring, anomaly detection, and self-healing techniques
- Excellent analytical, problem-solving, and strategic planning skills
- Strong communication skills and ability to collaborate effectively across teams
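Anomaly-detection models of the kind this role calls for are often prototyped with something as simple as a rolling z-score over a metric stream before reaching for the heavier frameworks listed above. An illustrative stdlib-Python sketch (the synthetic latency series and thresholds are invented):

```python
import statistics
from collections import deque

def detect_anomalies(values, window=20, threshold=3.0):
    """Flag indices whose z-score against a trailing window exceeds threshold."""
    history = deque(maxlen=window)
    anomalies = []
    for i, v in enumerate(values):
        if len(history) >= 5:  # need a few points before scoring
            mean = statistics.fmean(history)
            stdev = statistics.pstdev(history) or 1e-9  # guard flat series
            if abs(v - mean) / stdev > threshold:
                anomalies.append(i)
        history.append(v)
    return anomalies

# Steady latency around 100 ms with one injected spike at index 30.
latency = [100 + (i % 3) for i in range(60)]
latency[30] = 500
print(detect_anomalies(latency))  # → [30]
```

Production AIOps pipelines would replace the z-score with a learned model (e.g., an isolation forest or an autoencoder) and wire the alerts into the automated remediation described above, but the detect-score-alert loop is the same.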
posted 2 months ago
experience: 6 to 10 Yrs
location: Pune, Maharashtra
skills:
  • Proactive Monitoring
  • Team Coordination
  • Communication
  • Documentation
  • Shift Management
  • Continuous Improvement
  • Monitoring Tools
  • Systems
  • Infrastructure
  • Analytical Skills
  • Interpersonal Skills
  • Python
  • Bash
  • CCNA
  • CCNP
  • Incident Detection
  • Network Protocols
  • Problem-Solving
  • ITIL Practices
  • Incident Command System (ICS)
Job Description
As a highly skilled engineer for the Digital Operations Center (DOC) team at CrowdStrike, you will be responsible for managing end-to-end incident lifecycles in a 24/7/365 environment. Your critical role will involve proactive monitoring, incident detection, team coordination, communication, documentation, shift management, and continuous improvement to ensure the stability, performance, and reliability of the IT network and product services.

Key Responsibilities:
- Proactive Monitoring: Utilize internal dashboards to monitor support ticket volumes, IT infrastructure performance, and product/sensor health metrics to identify potential issues before they impact operations.
- Incident Detection and Escalation: Swiftly identify anomalies or incidents, determine the severity, and escalate complex issues to Sr. DOC Analysts for resolution.
- Team Coordination: Serve as the central point of contact for technology incidents, escalate issues to relevant teams, and ensure timely resolution.
- Communication: Maintain clear communication with internal stakeholders, provide updates on ongoing incidents, and collaborate with the Incident Commander for executive-level communication.
- Documentation and Reporting: Keep detailed records of incidents, actions taken, and resolutions, contribute to post-incident reviews, and provide insights for continuous improvement.
- Shift Management: Work in rotational shifts to provide 24/7 coverage, ensuring smooth handovers between shifts.
- Continuous Improvement: Actively participate in refining monitoring tools, processes, and protocols to enhance the effectiveness of the Digital Operations Center.

Qualifications Required:
- Bachelor's degree in Engineering, Computer Science, Information Technology, or a related field
- Minimum 6 years of experience in a Digital Operations Center (DOC) or similar environment such as a NOC or IT Operations
- Experience with monitoring tools, network protocols, systems, and infrastructure
- Strong problem-solving, analytical, communication, and interpersonal skills
- Ability to work under pressure and manage multiple priorities, with proficiency in scripting languages like Python and Bash
- Familiarity with ITIL practices, Incident Command System (ICS) principles, and industry certifications such as CCNA and CCNP
- Willingness to work in a high-stress, fast-paced environment and in rotational shifts
posted 2 months ago

Computer Vision Intern

Muks Robotics AI Pvt. Ltd.
experience: 0 to 4 Yrs
location: Pune, Maharashtra
skills:
  • Algorithm design
  • Object detection
  • Segmentation
  • OCR
  • Deep learning
  • Data collection
  • Data cleaning
  • Python programming
  • Pose estimation
  • Data labelling
  • Data augmentation
  • Real-time deployment
  • High-performance processing
Job Description
As a Computer Vision Intern at Muks Robotics AI Pvt. Ltd., you will play a crucial role in developing, optimizing, and deploying vision-based solutions for real-time, production-grade environments. Your responsibilities will include:

- Assisting in developing computer vision pipelines for detection, tracking, segmentation, recognition, and measurement tasks.
- Supporting the building of solutions for OCR, human pose estimation, activity recognition, and anomaly/defect detection.
- Aiding in training, evaluating, and optimizing deep learning models for accuracy and performance.
- Helping manage data collection, cleaning, labeling, and augmentation processes.
- Supporting the optimization of systems for real-time deployment and high-performance processing.
- Collaborating with hardware, firmware, and backend teams on deployment and integration tasks.
- Documenting algorithms, experiments, and deployment workflows.

Qualifications required for this role include:

- Bachelor's or Master's degree in Computer Science, AI/ML, Electronics, or a related field.
- Familiarity with Python programming and basic algorithm design.
- Exposure to computer vision tasks such as object detection, segmentation, OCR, or pose estimation.
- Interest in deploying AI models on edge or server environments.
- Enthusiasm for working with real-time video data and high-performance processing.
- Willingness to learn and travel as required.

It would be nice to have experience with multi-threaded or distributed processing, an understanding of industrial communication protocols, and a background or interest in automation, safety monitoring, or real-time analytics. At Muks Robotics AI Pvt. Ltd., you will have the opportunity to work on high-impact AI systems, gain hands-on exposure to computer vision projects in real-world environments, and be part of a collaborative, innovation-focused workplace.

(Note: Additional company details were not included in the provided job description.)
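For context on the detection and evaluation work above: object-detection models are routinely scored with intersection-over-union (IoU) between predicted and ground-truth bounding boxes. An illustrative stdlib-Python sketch (box coordinates are invented):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (clamped to zero width/height when boxes are disjoint).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A prediction shifted 10 px from a 100x100 ground-truth box.
truth = (0, 0, 100, 100)
pred = (10, 10, 110, 110)
print(round(iou(truth, pred), 3))  # 8100 / 11900 ≈ 0.681
```

Evaluation suites typically count a prediction as a true positive when IoU exceeds a threshold such as 0.5; the same overlap test also drives non-maximum suppression during inference.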
posted 2 months ago
experience: 11 to 17 Yrs
location: Pune, Maharashtra
skills:
  • Power BI
  • Tableau
  • SQL
  • ETL tools
  • Data modeling
  • Python
  • R
Job Description
As a BI Architect at Persistent Ltd., you will be responsible for designing, developing, and implementing scalable and efficient Business Intelligence (BI) solutions using tools like Power BI, Tableau, and modern data platforms. Your role will involve working closely with stakeholders to understand business needs and translating them into actionable data insights and visualizations.

**Key Responsibilities:**
- Design and implement end-to-end BI solutions using Tableau and Power BI.
- Develop semantic models, dashboards, and reports to support business decision-making.
- Create and maintain data models, ETL pipelines, and data warehouses.
- Optimize performance of BI solutions and ensure data accuracy and consistency.
- For senior roles: Integrate AI/ML features such as Power BI Copilot, Smart Narratives, AutoML, and natural language querying.
- For senior roles: Build predictive dashboards and anomaly detection models.
- Work with cross-functional teams (finance, sales, operations) to gather requirements.
- Implement data governance, security, and compliance frameworks.
- Mentor junior BI developers and analysts.

**Qualifications Required:**
- Strong expertise in Power BI, including semantic models, DAX, paginated reports, and the Q&A visual.
- Proficiency in Tableau covering Desktop, Server, Cloud, and LOD expressions.
- Solid SQL skills and experience with relational databases (SQL Server, PostgreSQL, Oracle).
- Experience with ETL tools (e.g., Talend, Informatica) and data modeling (star/snowflake schemas).
- Familiarity with cloud platforms such as Azure, AWS, and Google BigQuery.
- Bonus: Knowledge of Python or R for data manipulation and AI/ML integration.

Persistent Ltd. offers a competitive salary and benefits package along with a culture focused on talent development. Quarterly growth opportunities and company-sponsored higher education and certifications are provided. You will have the chance to work with cutting-edge technologies and engage in employee initiatives like project parties, flexible work hours, and Long Service awards. Additionally, annual health check-ups and insurance coverage for self, spouse, children, and parents are included.

Persistent Ltd. is committed to fostering diversity and inclusion in the workplace. The company supports hybrid work and flexible hours to accommodate diverse lifestyles. The office is accessibility-friendly with ergonomic setups and assistive technologies for employees with physical disabilities. If you have specific requirements due to disabilities, you are encouraged to inform us during the application process or at any time during your employment.

At Persistent, your potential will be unleashed. Join us at persistent.com/careers to explore exciting opportunities.
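For context on the star/snowflake modeling this role mentions: a star schema pairs a central fact table with dimension tables that BI queries join and aggregate over. An illustrative sketch using Python's built-in sqlite3 module (table names and figures are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- One dimension table and one fact table: the simplest star schema.
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales (
        sale_id INTEGER PRIMARY KEY,
        product_id INTEGER REFERENCES dim_product(product_id),
        amount REAL
    );
    INSERT INTO dim_product VALUES (1, 'Hardware'), (2, 'Software');
    INSERT INTO fact_sales VALUES (1, 1, 250.0), (2, 1, 150.0), (3, 2, 600.0);
""")

# Typical BI query shape: aggregate the fact table by a dimension attribute.
rows = conn.execute("""
    SELECT d.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product d USING (product_id)
    GROUP BY d.category
    ORDER BY d.category
""").fetchall()
print(rows)  # [('Hardware', 400.0), ('Software', 600.0)]
```

The same fact-to-dimension join pattern underlies the semantic models that Power BI and Tableau build over a warehouse; a snowflake schema simply normalizes the dimension tables further.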
posted 5 days ago
experience: 7 to 11 Yrs
location: Pune, Maharashtra
skills:
  • ML
  • NLP
  • Anomaly Detection
  • Predictive Analytics
  • Vulnerability Management
  • Threat Intelligence
  • Endpoint Security
  • Cloud Security
  • Product Management
  • Business Acumen
  • Communication Skills
  • Innovation
  • AI
  • Agentic AI
  • Cybersecurity
  • Data-Driven Decision Making
Job Description
Role Overview:
You will join Qualys, a global leader in cybersecurity, as a visionary Senior Product Manager responsible for owning and driving the AI strategy and product development for the Qualys platform. Your role will involve leading AI-powered security solutions to protect enterprises worldwide, with a focus on advancing Agentic AI capabilities.

Key Responsibilities:
- Own the AI and Agentic AI product strategy and roadmap for the Qualys cybersecurity platform, emphasizing autonomous threat detection, intelligent automation, and predictive analytics.
- Drive innovation in AI/ML and Agentic AI capabilities to enhance platform accuracy, operational efficiency, and user experience.
- Act as the Business Owner for AI features, prioritizing initiatives that drive revenue growth, customer adoption, and platform differentiation.
- Develop sales enablement materials, research the competitive landscape, and engage directly with customers to gather feedback and translate insights into actionable product requirements.
- Lead cross-functional teams to deliver AI and Agentic AI features on time and with high quality.

Qualifications Required:
- Bachelor's degree in Computer Science, Engineering, Data Science, or related field; MBA or equivalent experience preferred.
- 7+ years of product management experience, with at least 2 years focused on AI/ML products in cybersecurity or related domains.
- Deep understanding of AI/ML technologies, including supervised/unsupervised learning, NLP, anomaly detection, predictive analytics, and Agentic AI concepts.
- Proven track record of building and scaling AI-powered and Agentic AI-enabled security products or platforms.
- Strong business acumen with experience in pricing, packaging, and go-to-market strategies for AI-driven solutions.
- Excellent communication skills with the ability to articulate complex AI concepts to technical and non-technical audiences.
- Passion for innovation, experimentation, and data-driven decision-making.
- Familiarity with cybersecurity domains such as vulnerability management, threat intelligence, endpoint security, or cloud security is highly desirable.
- Ability to thrive in a fast-paced, collaborative environment and lead cross-functional teams to success.

© 2025 Shine.com | All Rights Reserved
