Process Manager

eClerx
  • Posted On: 2026-03-18 15:42:53
  • Openings: 10
  • Applicants: 0
Job Description

• Bring in industry best practices for creating and maintaining robust data pipelines for complex data projects, with or without an AI component:
  o Programmatically retrieve data (mostly unstructured) from several static and real-time sources, including web scraping and API use
  o Convert this data into a structured format
  o Harmonize the data into a common format and store it in a dedicated database
  o Schedule the different jobs into a dedicated pipeline
  o Render results through dynamic interfaces (web / mobile / dashboard), with the ability to log usage and granular user feedback
  o Tune performance and optimally implement complex Python scripts and SQL
• Industrialize ML/DL solutions, deploy and manage production services, and proactively handle data issues arising on live apps
• Perform ETL on large and complex datasets for AI applications; work closely with data scientists on performance optimization of large-scale ML/DL model fine-tuning
• Build data tools to facilitate fast data cleaning and statistical analysis
• Build the data architecture and ensure it is secure and compliant
• Resolve issues escalated from Business and Functional areas on data quality, accuracy, and availability
• Work closely with APAC IT Transformation and coordinate with a fully decentralized team across different locations in APAC and the global HQ (Paris)
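The retrieve → harmonize → store pipeline described above can be sketched in a few lines. This is a minimal illustration only, not part of the role's actual stack: the stage names, the field mappings, and the in-memory SQLite database are all hypothetical stand-ins for real scraped sources and a dedicated production database.

```python
import json
import sqlite3

def retrieve(raw_source: str) -> list[dict]:
    """Parse semi-structured records. The JSON string here stands in for
    a scraped page or an API response."""
    return json.loads(raw_source)

def harmonize(records: list[dict]) -> list[tuple]:
    """Map heterogeneous field names onto one common (name, value) schema."""
    common = []
    for rec in records:
        # Different sources label the same fact differently; pick whichever is present.
        name = rec.get("company") or rec.get("issuer") or "unknown"
        value = float(rec.get("revenue") or rec.get("turnover") or 0.0)
        common.append((name, value))
    return common

def store(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Persist harmonized rows into a dedicated table."""
    conn.execute("CREATE TABLE IF NOT EXISTS facts (name TEXT, value REAL)")
    conn.executemany("INSERT INTO facts VALUES (?, ?)", rows)
    conn.commit()

# Run the stages end to end against an in-memory database.
raw = '[{"company": "A", "revenue": 1.5}, {"issuer": "B", "turnover": 2.0}]'
conn = sqlite3.connect(":memory:")
store(harmonize(retrieve(raw)), conn)
total = conn.execute("SELECT SUM(value) FROM facts").fetchone()[0]
```

In a real deployment the scheduling step would wrap stages like these in an orchestrated job, and the rendering step would serve the stored table to a dashboard.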
You should:
• Be an expert in structured and unstructured data in traditional and Big Data environments: Oracle / SQL Server, MongoDB, Hive / Pig, BigQuery, and Spark
• Have excellent knowledge of Python programming in both traditional and distributed models (PySpark)
• Be an expert in shell scripting and writing schedulers
• Have hands-on experience with the cloud, deploying complex data solutions in hybrid cloud / on-premise environments for both data extraction / storage and computation
• Have experience working with industry-standard services such as message queues, Redis, Elasticsearch, Kafka, or Spark Streaming
• Be well versed in DevOps best practices such as containerization and CI/CD pipelines (Jenkins and Maven)
• Have hands-on experience deploying production apps that use large volumes of data with state-of-the-art technologies such as Docker, Kubernetes, and Kafka
• Have strong knowledge of data security best practices
• Have 10+ years' experience in a data engineering role
• Be a graduate of a Tier-1 university
• Knowledge of finance and experience in handling company annual reports would be greatly appreciated
• And most importantly, be a passionate coder who really cares about building apps that help us do things better, smarter, and faster

More Info
Full Time
Not Disclosed
English
Not Disclosed
Education
Any Graduate
Not Disclosed
Required Skills
Hive, Kubernetes, PySpark, CI/CD, Apache Pig, Redis, SQL, Docker

Contact Details
eClerx
+91 987654567
info@eclerx.com
  • Experience: 6+ years
  • Salary: Above 10 lakhs annually
  • Location for Hiring: Mumbai
Similar Jobs

Non Voice Process Customer Care Executive
TRIGENT SOFTWARE PRIVATE LIMITED
  • Mumbai
  • 4 Days
Disputes Lawyer
Law Firm
  • 3 years
  • Mumbai
  • 4 Days
Analyst
eClerx
  • Fresher
  • Mumbai
  • 4 Days
Fresher
Unifybrains Infotech
  • Fresher
  • Hyderabad
  • 4 Days