• IBM

Data Engineer

Job Description

Responsibilities:

 

● Design, build, and optimize data pipelines to ingest, process, transform, and load data from various sources into our data platform

● Implement and maintain ETL workflows using tools like Debezium, Kafka, Airflow, and Jenkins to ensure reliable and timely data processing (see the pipeline sketch after this list)

● Develop and optimize SQL and NoSQL database schemas, queries, and stored procedures for efficient data retrieval and processing

● Work with both relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, DocumentDB) to build scalable data solutions

● Design and implement data warehouse solutions that support analytical needs and machine learning applications

● Collaborate with data scientists and ML engineers to prepare data for AI/ML models and implement data-driven features

● Implement data quality checks, monitoring, and alerting to ensure data accuracy and reliability

● Optimize query performance across various database systems through indexing, partitioning, and query refactoring

● Develop and maintain documentation for data models, pipelines, and processes

● Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs

● Stay current with emerging technologies and best practices in data engineering
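
Purely as an illustration of the ETL responsibility above: a minimal sketch of an Airflow DAG wiring an extract-transform-load flow, written in Python (one of the languages the posting accepts) and assuming Airflow 2.4 or later. The DAG id, task ids, and helper functions are hypothetical and not part of the role description.

# Illustrative sketch only; DAG id, task ids, and callables are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_orders(**context):
    # Placeholder: read changed rows from a source system,
    # e.g. a Debezium change-data-capture topic on Kafka.
    ...

def transform_orders(**context):
    # Placeholder: clean, deduplicate, and conform the extracted rows.
    ...

def load_orders(**context):
    # Placeholder: upsert the transformed rows into the warehouse.
    ...

with DAG(
    dag_id="orders_daily_etl",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # "schedule_interval" on Airflow < 2.4
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform", python_callable=transform_orders)
    load = PythonOperator(task_id="load", python_callable=load_orders)

    extract >> transform >> load

In a real pipeline the tasks would hand data off through durable storage (for example S3 or a staging table) rather than in memory, and the DAG would be deployed through CI/CD; those details are omitted from the sketch.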

Requirements:

 

 

● 6+ years of experience in data engineering or related roles with a proven track record of building data pipelines and infrastructure

● Strong proficiency in SQL and experience with relational databases like MySQL and PostgreSQL

● Hands-on experience with NoSQL databases such as MongoDB or AWS DocumentDB

● Expertise in designing, implementing, and optimizing ETL processes using tools like Kafka, Debezium, Airflow, or similar technologies

● Experience with data warehousing concepts and technologies

● Solid understanding of data modeling principles and best practices for both operational and analytical systems

● Proven ability to optimize database performance, including query optimization, indexing strategies, and database tuning (see the indexing example after this list)

● Experience with AWS data services such as RDS, Redshift, S3, Glue, Kinesis, and the ELK stack

● Proficiency in at least one programming language (Python, Node.js, Java)

● Experience with version control systems (Git) and CI/CD pipelines

● Bachelor's degree in Computer Science, Engineering, or a related field
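
As a small, hedged illustration of the query-optimization requirement above: the snippet below uses Python with psycopg2 against PostgreSQL (both named in the posting) to compare a query plan before and after adding an index. The table, column, index, and connection string are hypothetical.

# Illustrative sketch only; table, column, index, and DSN are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=analytics user=etl")  # hypothetical connection

def show_plan(cur):
    # Print PostgreSQL's plan for a filter on customer_id.
    cur.execute("EXPLAIN SELECT * FROM orders WHERE customer_id = %s", (42,))
    print("\n".join(row[0] for row in cur.fetchall()))

with conn, conn.cursor() as cur:
    show_plan(cur)  # typically a sequential scan while customer_id is unindexed

    # Add an index so the same filter can use an index scan instead.
    cur.execute(
        "CREATE INDEX IF NOT EXISTS idx_orders_customer_id ON orders (customer_id)"
    )

    show_plan(cur)  # usually an index or bitmap index scan once the index exists

Whether the planner actually switches to an index scan depends on table size and statistics (an ANALYZE may be needed first), which is the kind of judgement the requirement refers to.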

 


Preferred Qualifications:

● Experience with graph databases (Neo4j, Amazon Neptune)

● Knowledge of big data technologies such as Hadoop, Spark, Hive, and data lake architectures

● Experience working with streaming data technologies and real-time data processing

● Familiarity with data governance and data security best practices

● Experience with containerization technologies (Docker, Kubernetes)

● Understanding of financial back-office operations and FinTech domain

● Experience working in a high-growth startup environment

● Master's degree in Computer Science, Data Engineering, or related field

 

Offered Salary

₹ Open

Job Details

  • 6-8 years of experience
  • 1 Opening
  • Open
  • Hyderabad
