We are looking for a skilled Data Engineer with 3–6 years of experience building scalable data pipelines and analytics solutions using Azure Databricks, PySpark/Python, and SQL. The ideal candidate will have strong expertise in optimizing data workflows, ensuring data quality, and enabling actionable business insights through robust data engineering practices.
Primary Skills:
Azure Databricks – strong hands-on expertise
PySpark / Python – proficiency in writing scalable data transformations
SQL – advanced querying, transformations, and optimization skills
Building scalable data pipelines – design, implement, and maintain
Data workflow optimization – performance tuning and efficiency
Data quality assurance – ensuring accuracy, consistency, and reliability
Analytics solutions – enabling actionable insights from data
Preferred / Additional Skills:
Azure ecosystem knowledge (Data Lake Storage, Synapse, Azure Functions, Azure Data Factory)
Understanding of Big Data concepts and distributed systems (beyond Databricks)
Experience with orchestration tools (Airflow, Azure Data Factory, etc.)
Cloud DevOps practices (CI/CD for data pipelines, monitoring)
Knowledge of business intelligence and analytics integration
Exposure to data governance and security practices
Familiarity with other languages/tools (Scala, R, or Java for Spark)