Data Engineer
Location: Bengaluru, Gurgaon, Pune
Job code: 101101
Posted on: Jan 10, 2025
About Us:
AceNet Consulting is a fast-growing global business and technology consulting firm specializing in business strategy, digital transformation, technology consulting, product development, start-up advisory, and fund-raising services for global clients across banking & financial services, healthcare, supply chain & logistics, consumer retail, manufacturing, eGovernance, and other industry sectors.
We are looking for hungry, highly skilled and motivated individuals to join our dynamic team. If you’re passionate about technology and thrive in a fast-paced environment, we want to hear from you.
Job Summary:
We are seeking a skilled and detail-oriented Data Engineer to join our team. The ideal candidate will be proficient in developing, maintaining, and optimizing data pipelines using Scala, Spark, and cloud technologies, particularly on the Azure platform. This role involves ensuring data quality, resolving pipeline issues, and supporting data analysis activities.
Required Skills:
*Proficiency in Scala and Spark: Solid understanding of Spark core concepts, including RDDs, DataFrames, SQL, and basic streaming (a short illustrative sketch follows this list).
*Azure Experience: Working knowledge of Azure Synapse Analytics, Azure Data Factory (ADF), and other relevant Azure data services.
*Data Engineering Fundamentals: Basic understanding of data modeling, data quality, and data governance concepts.
*SQL: Strong SQL skills for data querying, manipulation, and analysis.
*Version Control: Proficiency in using Git (e.g., GitHub) for source code management.
*CI/CD: Basic understanding of CI/CD concepts and experience with tools like GitHub Actions (desirable).
*Problem-solving & Troubleshooting: Ability to diagnose and resolve data pipeline issues effectively.
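For candidates gauging fit, the snippet below is a minimal, hypothetical sketch (not company code) of the DataFrame, SQL, and RDD basics listed above; the local session and the `orders` dataset and column names are illustrative assumptions only.

```scala
import org.apache.spark.sql.SparkSession

object SkillsSketch {
  def main(args: Array[String]): Unit = {
    // Local session for illustration; production pipelines would run on a cluster.
    val spark = SparkSession.builder()
      .appName("skills-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // DataFrame from an in-memory dataset, standing in for an ingested source.
    val orders = Seq(
      ("o-1", "IN", 120.0),
      ("o-2", "US", 80.5),
      ("o-3", "IN", 42.0)
    ).toDF("order_id", "country", "amount")

    // Spark SQL over a temporary view, per the SQL skills listed above.
    orders.createOrReplaceTempView("orders")
    spark.sql(
      "SELECT country, SUM(amount) AS total FROM orders GROUP BY country"
    ).show()

    // RDD view of the same data, for the RDD fundamentals mentioned above.
    val amounts = orders.rdd.map(_.getDouble(2))
    println(s"max amount = ${amounts.max()}")

    spark.stop()
  }
}
```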
Key Responsibilities:
*Develop, maintain, and monitor data pipelines using Scala and Spark on the Azure platform.
*Ingest and transform data using Scala and SQL on Spark.
*Investigate and resolve data pipeline errors and performance issues.
*Identify and remediate data discrepancies, ambiguities, and inconsistencies (see the sketch after this list).
*Provide technical support for data analysis activities.
*Source and version control code/configuration artifacts using GitHub.
*Deploy code artifacts using GitHub Actions workflows.
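Purely as an illustration of the discrepancy-remediation responsibility above (not a prescribed implementation), the sketch below flags duplicate keys and per-column nulls in a hypothetical `customers` DataFrame; in practice the input would be ingested from a source such as ADF or Synapse.

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object QualityCheckSketch {
  // Rows whose key appears more than once: one concrete form of the
  // data discrepancies the responsibilities mention.
  def duplicateKeys(df: DataFrame, keyCol: String): DataFrame =
    df.groupBy(col(keyCol))
      .count()
      .filter(col("count") > 1)
      .withColumnRenamed("count", "duplicate_rows")

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("quality-check-sketch")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Hypothetical ingested frame; names and values are illustrative only.
    val customers = Seq(
      ("c-1", "alice@example.com"),
      ("c-1", "alice@example.com"), // duplicate key
      ("c-2", null)                 // missing value
    ).toDF("customer_id", "email")

    duplicateKeys(customers, "customer_id").show()

    // Null count per column: a quick first pass on inconsistencies.
    customers.select(
      customers.columns.map(c => sum(col(c).isNull.cast("int")).alias(c)): _*
    ).show()

    spark.stop()
  }
}
```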
Role Requirements and Qualifications:
*At least 2 years of hands-on experience developing Scala/Spark pipelines.
*At least 2 years of hands-on experience handling large volumes of data using the PySpark ecosystem.
*At least 2 years of experience with Scala, Python, or Java.
*At least 1 year of Azure or AWS development experience.
*At least 4 years of experience in Information Technology.
Why Join Us:
*Opportunities to work on transformative projects, cutting-edge technology and innovative solutions with leading global firms across industry sectors.
*Continuous investment in employee growth and professional development, with a strong focus on upskilling and reskilling.
*Competitive compensation & benefits, ESOPs and international assignments.
*Supportive environment with healthy work-life balance and a focus on employee well-being.
*Open culture that values diverse perspectives, encourages transparent communication and rewards contributions.
How to Apply:
If you are interested in joining our team and meet the qualifications listed above, please submit your resume and a cover letter detailing your experience and why you are the ideal candidate for this position.