524 Data Engineer jobs in the United Arab Emirates
Data Engineer
Posted today
Job Description
The Data Engineer will be responsible for developing semantic models on top of the Data Lake/Data Warehouse to fulfill the self-service BI foundation requirements. This includes data extraction from various data sources and integration into the central data lake/data warehouse using enterprise platforms like Informatica iPaaS.
Key Responsibilities of Data Engineer:
- Designing data warehouse data models based on business requirements.
- Designing, developing, and testing both batch and real-time Extract, Transform and Load (ETL) processes required for data integration.
- Ingesting both structured and unstructured data into the SMBU data lake/data warehouse system.
- Designing and developing semantic models/self-service cubes.
- Performing BI administration and access management to ensure access and reports are properly governed.
- Performing unit testing and data validation to ensure business UAT is successful.
- Performing ad-hoc data analysis and presenting results in a clear manner.
- Assessing data quality of the source systems and proposing enhancements to achieve a satisfactory level of data accuracy.
- Optimizing ETL processes to ensure execution time meets requirements.
- Maintaining and architecting ETL pipelines to ensure data is loaded on time on a regular basis.
- 5 to 8 years of overall experience.
- Proven experience in the development of dimensional models in Azure Synapse with strong SQL knowledge.
- Minimum of 3 years working as a Data Engineer in the Azure ecosystem, specifically using Synapse, ADF, and Databricks.
- Preferably 3 years of experience with data warehousing, ETL development, SQL Queries, Synapse, ADF, PySpark, and Informatica iPaaS for data ingestion & data modeling.
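For candidates gauging these requirements, the following is a minimal PySpark sketch of the kind of batch dimension load this role describes; the paths, table, and column names are hypothetical, and a production Synapse/ADF pipeline would be considerably more involved.

```python
# Minimal sketch of a batch dimension load in PySpark.
# All paths, table names, and columns are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dim_customer_load").getOrCreate()

# Extract: read raw customer records from the landing zone.
raw = spark.read.parquet("/landing/crm/customers/")

# Transform: deduplicate on the business key and shape the dimension.
dim_customer = (
    raw.dropDuplicates(["customer_id"])
       .select(
           F.col("customer_id").alias("customer_key"),
           F.initcap("full_name").alias("customer_name"),
           F.upper("country_code").alias("country"),
           F.current_timestamp().alias("load_ts"),
       )
)

# Load: overwrite the dimension table in the warehouse zone.
dim_customer.write.mode("overwrite").parquet("/warehouse/dim_customer/")
```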
Data Engineer
Posted today
Job Description
Job Summary:
We are looking for a talented and experienced Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, building, and maintaining our data pipelines and infrastructure. You will work closely with our data scientists and analysts to ensure that our data is accessible, reliable, and secure.
Responsibilities:
- Design, develop, and maintain scalable data pipelines.
- Build and manage data warehouses and data lakes.
- Implement data quality and data governance best practices.
- Work with data scientists and analysts to support their research and development projects.
- Collaborate with other engineers to build and maintain our data infrastructure.
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 3-5 years of experience in a data engineering role
- Strong programming skills in Python, Java, or Scala
- Experience with big data technologies such as Hadoop, Spark, and Kafka
- Experience with cloud computing platforms such as AWS, Azure, or Google Cloud Platform
- Experience with relational and NoSQL database systems such as PostgreSQL and Cassandra
- Experience with data warehousing and ETL (extract, transform, load) processes
- Excellent problem-solving and analytical skills
- Strong communication and teamwork skills
Bonus Points:
- Experience with SAS, Kubernetes, or Docker
- Experience with machine learning and artificial intelligence
- Experience with cloud-native development
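To make the ETL requirement concrete, here is a small, self-contained Python sketch of an extract-transform-load flow; sqlite3 stands in for a warehouse such as PostgreSQL, and the file and column names are invented.

```python
# Tiny end-to-end ETL sketch: CSV -> transform -> relational table.
# sqlite3 stands in for a production database such as PostgreSQL;
# the file name and columns are hypothetical.
import csv
import sqlite3

def extract(path: str) -> list[dict]:
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    # Normalize emails and skip rows missing a required field.
    return [
        (row["id"], row["email"].strip().lower())
        for row in rows
        if row.get("id") and row.get("email")
    ]

def load(records: list[tuple], conn: sqlite3.Connection) -> None:
    conn.execute("CREATE TABLE IF NOT EXISTS users (id TEXT PRIMARY KEY, email TEXT)")
    conn.executemany("INSERT OR REPLACE INTO users VALUES (?, ?)", records)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect("warehouse.db")
    load(transform(extract("users.csv")), conn)
```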
Data Engineer
Posted today
Job Description
Location: Dubai
Who Can Apply: Candidates who are currently in Dubai
Job Type: Contract
Experience: 8+ years
Job Summary:
We are looking for an experienced Data Engineer to design, develop, and optimize data pipelines, ETL processes, and data integration solutions. The ideal candidate should have expertise in AWS cloud services, data engineering best practices, open-source tools, and data schema design. The role requires hands-on experience with large-scale data processing, real-time data streaming, and cloud-based data architectures.
Key Responsibilities:
- Develop and Maintain Data Pipelines to process structured and unstructured data efficiently.
- Implement ETL/ELT Workflows for batch and real-time data processing.
- Optimize Data Processing Workflows using distributed computing frameworks.
- Ensure Data Integrity and Quality through data validation, cleaning, and transformation techniques.
- Work with AWS Cloud Services, including S3, Redshift, Glue, Lambda, DynamoDB, and Kinesis.
- Leverage Open-Source Tools like Apache Spark, Airflow, Kafka, and Flink for data processing.
- Manage and Optimize Database Performance for both SQL and NoSQL environments.
- Collaborate with Data Scientists and Analysts to enable AI/ML model deployment and data accessibility.
- Support Data Migration Initiatives from on-premise to cloud-based data platforms.
- Ensure Compliance and Security Standards in handling sensitive and regulated data.
- Develop Data Models and Schemas for efficient storage and retrieval.
Required Skills & Qualifications:
- 8+ years of experience in data engineering, data architecture, and cloud computing.
- Strong knowledge of AWS Services such as Glue, Redshift, Athena, Lambda, and S3.
- Expertise in ETL Tools, including Talend, Apache NiFi, Informatica, dbt, and AWS Glue.
- Proficiency in Open-Source Tools such as Apache Spark, Hadoop, Airflow, Kafka, and Flink.
- Strong Programming Skills in Python, SQL, and Scala.
- Experience in Data Schema Design, normalization, and performance optimization.
- Knowledge of Real-time Data Streaming using Kafka, Kinesis, or Apache Flink.
- Experience in Data Warehouse and Data Lake Solutions.
- Hands-on experience with DevOps and CI/CD Pipelines for data engineering workflows.
- Understanding of AI and Machine Learning Data Pipelines.
- Strong analytical and problem-solving skills.
Preferred Qualifications:
- AWS Certified Data Analytics – Specialty or AWS Solutions Architect certification.
- Experience with Kubernetes, Docker, and serverless data processing.
- Exposure to MLOps and data engineering practices for AI/ML solutions.
- Experience with distributed computing and big data frameworks.
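As a rough illustration of the AWS services listed above, this hypothetical Lambda-style handler reads a JSON object from S3 and writes items to DynamoDB with boto3; the bucket, event shape, and table name are assumptions, and production code would add batching, retries, and validation.

```python
# Sketch of a Lambda-style handler: S3 object in, DynamoDB items out.
# Bucket, key, and table names are hypothetical placeholders.
import json
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")  # hypothetical table name

def handler(event, context):
    # Assumes an S3 event notification shape; real events should be validated.
    record = event["Records"][0]["s3"]
    obj = s3.get_object(Bucket=record["bucket"]["name"], Key=record["object"]["key"])
    orders = json.loads(obj["Body"].read())

    # Write each order; production code would use batch_writer and retries.
    for order in orders:
        table.put_item(Item={"order_id": order["id"], "amount": str(order["amount"])})
    return {"loaded": len(orders)}
```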
Data Engineer
Posted today
Job Description
About the Role
We are seeking a motivated and technically versatile Data Engineer to join our team. You will play a key role in delivering data platforms, pipelines, and ML enablement within a Databricks on Azure environment.
As part of a stream-aligned delivery team, you’ll work closely with Data Scientists, Architects, and Product Managers to build scalable, high-quality data solutions for clients. You'll be empowered by a collaborative environment that values continuous learning, Agile best practices, and technical excellence.
Ideal candidates have strong hands-on experience in Databricks, Python, and ADF, and are comfortable in fast-paced, client-facing consulting engagements.
Skills and Experience requirements
1. Technical
- Databricks (or similar) e.g. Notebooks (Python, SQL), Delta Lake, job scheduling, clusters, and workspace management, Unity Catalog, access control awareness
- Cloud data engineering – ideally Azure, including storage (e.g., ADLS, S3), compute, and secrets management
- Development languages such as Python, SQL, C#, JavaScript, etc., especially for data ingestion, cleaning, and transformation
- ETL / ELT – including structured logging, error handling, reprocessing strategies, APIs, flat files, databases, message queues, event streaming, event sourcing etc.
- Automated testing (ideally TDD), pairing/mobbing, Trunk-Based Development, Continuous Deployment, and Infrastructure-as-Code (Terraform)
- Git and CI/CD for notebooks, data pipelines, and deployments
2. Integration & Data Handling
- Experienced in delivering platforms for clients – including file transfer, APIs (REST, etc.), SQL/NoSQL/graph databases, JSON, CSV, XML, Parquet, etc.
- Data validation and profiling – assessing incoming data quality and coping with schema drift, deduplication, and reconciliation
- Testing and monitoring pipelines: Unit tests for transformations, data checks, and pipeline observability
3. Working Style
- Comfortable leveraging the best of lean, agile and waterfall approaches. Can contribute to planning, estimation, and documentation, but also collaborative daily re-prioritisation
- Able to explain technical decisions to teammates or clients
- Documents decisions and keeps stakeholders informed
- Comfortable seeking support from other teams for Product, Databricks, Data architecture
- Happy to collaborate with Data Science team on complex subsystems
Nice-to-haves
- MLflow or light MLOps experience (for the data science touchpoints)
- dbt / Dagster / Airflow or similar transformation and orchestration tools
- Understanding of security and compliance (esp. around client data)
- Past experience in consulting or client-facing roles
Candidate Requirements
- 5–8 years (minimum 3–4 years hands-on with cloud/data engineering, 1–2 years in Databricks/Azure, and team/project leadership exposure)
- Bachelor’s degree in Computer Science, Data Engineering, Software Engineering, or Information Systems
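For a feel of the Databricks work described above, here is a minimal sketch of an ingestion step that tolerates schema drift and deduplicates before appending to a Delta table; the paths and columns are hypothetical, and running it locally would require the delta-spark package configured on the session (on Databricks, `spark` already exists).

```python
# Ingestion sketch: raw JSON -> Delta bronze table, tolerating schema
# drift and deduplicating on a business key. Paths and columns are
# hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_ingest").getOrCreate()

raw = (
    spark.read.json("/mnt/landing/events/")   # new columns may appear over time
         .dropDuplicates(["event_id"])        # hypothetical business key
         .withColumn("ingested_at", F.current_timestamp())
)

(
    raw.write.format("delta")
       .mode("append")
       .option("mergeSchema", "true")  # absorb drifted columns instead of failing
       .save("/mnt/bronze/events/")
)
```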
Disclaimer:
This job posting is not open to recruitment agencies. Any candidate profile submitted by a recruitment agency will be considered as being received directly from an applicant. Contango reserves the right to contact the candidate directly, without incurring any obligations or liabilities for payment of any fees to the recruitment agency.
Data Engineer
Posted today
Job Description
About the Role
We are an emerging AI-native, product-driven, agile start-up under the Abu Dhabi government, and we are seeking a motivated and technically versatile Data Engineer to join our team. You will play a key role in delivering data platforms, pipelines, and ML enablement within a Databricks on Azure environment.
As part of a stream-aligned delivery team, you’ll work closely with Data Scientists, Architects, and Product Managers to build scalable, high-quality data solutions for clients. You'll be empowered by a collaborative environment that values continuous learning, Agile best practices, and technical excellence.
Ideal candidates have strong hands-on experience in Databricks, Python, ADF and are comfortable in fast-paced, client-facing consulting engagements.
Skills and Experience requirements
1. Technical
- Databricks (or similar) e.g. Notebooks (Python, SQL), Delta Lake, job scheduling, clusters, and workspace management, Unity Catalog, access control awareness
- Cloud data engineering – ideally Azure, including storage (e.g., ADLS, S3), compute, and secrets management
- Development languages such as Python, SQL, C#, JavaScript, etc., especially for data ingestion, cleaning, and transformation
- ETL / ELT – including structured logging, error handling, reprocessing strategies, APIs, flat files, databases, message queues, event streaming, event sourcing etc.
- Automated testing (ideally TDD), pairing/mobbing, Trunk-Based Development, Continuous Deployment, and Infrastructure-as-Code (Terraform)
- Git and CI/CD for notebooks, data pipelines, and deployments
2. Integration & Data Handling
- Experienced in delivering platforms for clients – including file transfer, APIs (REST, etc.), SQL/NoSQL/graph databases, JSON, CSV, XML, Parquet, etc.
- Data validation and profiling – assessing incoming data quality and coping with schema drift, deduplication, and reconciliation
- Testing and monitoring pipelines: Unit tests for transformations, data checks, and pipeline observability
3. Working Style
- Comfortable leveraging the best of lean, agile and waterfall approaches. Can contribute to planning, estimation, and documentation, but also collaborative daily re-prioritisation
- Able to explain technical decisions to teammates or clients
- Documents decisions and keeps stakeholders informed
- Comfortable seeking support from other teams for Product, Databricks, Data architecture
- Happy to collaborate with Data Science team on complex subsystems
Nice-to-haves
- MLflow or light MLOps experience (for the data science touchpoints)
- dbt / Dagster / Airflow or similar transformation and orchestration tools
- Understanding of security and compliance (esp. around client data)
- Past experience in consulting or client-facing roles
Candidate Requirements
- 5–8 years (minimum 3–4 years hands-on with cloud/data engineering, 1–2 years in Databricks/Azure, and team/project leadership exposure)
- Bachelor’s degree in Computer Science, Data Engineering, Software Engineering, or Information Systems
Job Type: Full-time
Benefits: Visa, insurance, yearly flight ticket, bonus scheme, relocation logistics covered
The interviewing process consists of 2 or 3 technical/behavioral interviews.
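Given the posting's emphasis on automated testing and TDD, this small illustrative pytest example tests a pure-Python transformation; the function and its input format are invented for the sketch.

```python
# Illustrative TDD-style test for a small transformation function.
# The function name and input format are invented for this sketch.
import pytest

def clean_amount(raw: str) -> float:
    """Parse a currency string like 'AED 1,250.50' into a float."""
    value = raw.replace("AED", "").replace(",", "").strip()
    if not value:
        raise ValueError("empty amount")
    return float(value)

def test_clean_amount_strips_currency_and_commas():
    assert clean_amount("AED 1,250.50") == 1250.50

def test_clean_amount_rejects_empty_input():
    with pytest.raises(ValueError):
        clean_amount("AED ")
```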
Data Engineer
Posted today
Job Description
Job description:
- Design and automate data pipelines and ETL workflows for seamless ingestion, transformation, and integration.
- Build and manage batch and real-time data processing jobs to support analytical and operational needs.
- Write advanced SQL queries to extract, transform, and analyze large datasets across relational and non-relational systems.
- Design and develop data-driven applications and APIs, leveraging scalable technologies and backend frameworks.
- Integrate and manage diverse data sources, including relational databases, NoSQL systems, APIs, and streaming data.
- Ensure scalability, performance, and security of data applications through robust architectural and development practices.
- Monitor, debug, and resolve data and application issues across development, testing, and production environments.
- Collaborate with stakeholders to translate business needs into technical specifications and actionable data solutions.
- Develop, publish, and optimize interactive BI dashboards and reports using Power BI and Tableau (plus)
- Continuously monitor and improve BI performance, data visualization effectiveness, and data reliability (plus)
Job Specification and Technical Requirements:
- Bachelor's or Master's degree in Computer Science, Information Systems, Data Engineering, or related fields.
- 4–6 years of hands-on experience in data application development, data integration, and business intelligence.
- Background in fintech or data-intensive industries is a plus.
- Proficient in Apache Spark, Python, and Apache Druid for scalable data processing and real-time analytics.
- Strong in SQL across databases like MySQL, PostgreSQL, SQL Server, etc.
- Familiar with NoSQL technologies, especially Couchbase.
- Experience with ETL tools such as Apache Airflow, Talend, or similar.
- Knowledge of streaming platforms like Apache Kafka for real-time data ingestion (see the sketch after this list).
- Expertise in BI tools, including Power BI and Tableau, for data visualization and reporting is a plus
- Strong analytical thinking, problem-solving, and effective cross-functional collaboration skills
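As a sketch of the real-time ingestion mentioned above, here is a minimal consumer built on the kafka-python client; the broker address, topic, and message fields are hypothetical.

```python
# Minimal real-time ingestion sketch using the kafka-python client.
# Broker address, topic name, and message shape are hypothetical.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "transactions",                      # hypothetical topic
    bootstrap_servers="localhost:9092",  # hypothetical broker
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="analytics-loader",
)

for message in consumer:
    txn = message.value
    # A real pipeline would validate the record and write it to a store
    # such as Druid or PostgreSQL; here we just print the parsed fields.
    print(txn.get("id"), txn.get("amount"))
```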
Job Types: Full-time, Permanent
Data Engineer
Posted today
Job Description
Role Overview
The ETL Developer will be responsible for developing, maintaining, and optimizing SSIS-based ETL pipelines to support efficient and reliable data warehousing operations.
Responsibilities
- Design and implement SSIS packages for seamless data integration across various systems.
- Optimize ETL pipelines to handle high-volume data loads efficiently.
- Develop and manage error handling and data cleansing workflows to ensure data quality.
- Collaborate with data teams to support data warehousing and reporting needs.
- Monitor and troubleshoot ETL processes to ensure performance and reliability.
Experience & Skills Required
- Experience: Minimum of 4 years in ETL development.
- Skills:
- Expertise in SQL Server Integration Services (SSIS) for building and managing ETL processes.
- Proficiency in data transformation techniques and data warehousing concepts.
- Strong skills in performance tuning of ETL pipelines for optimal efficiency.
- Knowledge of SQL Server Analysis Services (SSAS) for multidimensional data modeling.
- Technology Stack: SQL Server, SSIS, SSAS.
- Strong problem-solving skills and attention to detail.
- Ability to work independently and in a team-oriented environment.
Qualifications
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Proven experience in designing and deploying ETL solutions in enterprise environments.
Preferred Qualifications
- Experience with other ETL tools or cloud-based data platforms is a plus.
- Familiarity with T-SQL scripting and stored procedures.
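SSIS packages themselves are authored visually rather than in code, but the surrounding T-SQL work can be sketched; this hypothetical Python snippet uses pyodbc to invoke a stored procedure of the kind an SSIS control flow might orchestrate, with the connection string and procedure name invented.

```python
# Hypothetical sketch: invoking a T-SQL stored procedure with pyodbc,
# the kind of step an SSIS control flow might orchestrate.
# The connection string and procedure name are invented placeholders.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=dwh-server;DATABASE=DataWarehouse;Trusted_Connection=yes;"
)

with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    # Run a nightly load procedure; commit so the load is persisted.
    cursor.execute("EXEC dbo.usp_LoadDimCustomer")
    conn.commit()
```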
Data Engineer
Posted today
Job Description
Company Description
CLT Academy is dedicated to cultivating a new generation of skilled and confident traders, particularly serving the dynamic forex market. Based in Dubai, UAE, with global reach, we offer world-class trading education through our comprehensive courses, including 'Traders Edge' and 'Elite Trade Blue Print'. Our curriculum covers everything from basics to advanced strategies, risk management, and trading psychology, complemented by practical live market sessions and one-to-one mentorship. Backed by the New York School of Forex Trading, CLT Academy helps beginners, consistent traders, and professionals enhance their trading journey.
Role Description
This is a full-time, on-site role for a Data Engineer located in Dubai. The Data Engineer will be responsible for designing, developing, and managing robust data pipelines and architectures. Day-to-day tasks include transforming raw data into insightful and actionable information, maintaining data integrity and security, and optimizing data workflows. The role also involves data modeling, implementing ETL processes, and collaborating with different teams to support their data needs.
The Role
We are seeking a highly motivated and skilled Data Engineer with a minimum of 4 years of professional experience to join our dynamic team. The ideal candidate will have expertise in Azure Databricks, Unity Catalog, advanced SQL, and at least intermediate proficiency in Python/PySpark. A deep understanding of the Medallion Architecture is a must, along with an interest in data analysis and familiarity with Power BI for driving actionable insights. Candidates with strong communication skills and experience in the fintech or banking industries will be given preference.
Our values (Respect, Entrepreneurship, Imagination, and Passion for the Customer) are at the core of everything we do, and we're looking for someone who embodies these principles while delivering high-quality data solutions.
Responsibilities
* Implement solutions based on the Medallion Architecture to enable efficient organization and querying of data across bronze, silver, and gold layers (see the sketch after this list).
* Write, optimize, and debug advanced SQL queries, ensuring high performance and reliability.
* Develop and maintain data transformation scripts using Python/PySpark.
* Collaborate with analysts to understand data requirements and assist with Power BI integrations for advanced visualization and reporting.
* Work closely with cross-functional teams to understand business needs and translate them into technical solutions.
* Ensure compliance with data governance standards and implement Unity Catalog to maintain centralized, secure, and efficient data management.
* Monitor and troubleshoot data pipelines, implementing adjustments as necessary to meet service-level agreements (SLAs).
* Stay updated on modern data engineering technologies, practices, and tools, continuously contributing to the team's technical knowledge.
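For readers unfamiliar with the Medallion Architecture referenced above, this notebook-style PySpark sketch promotes data from bronze to silver to gold; all paths and columns are hypothetical, and it assumes a Delta-enabled Spark environment (on Databricks, `spark` already exists).

```python
# Medallion sketch: promote events from bronze to silver (cleaned)
# and aggregate silver into gold (business-level metrics).
# Paths and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_demo").getOrCreate()

# Bronze -> Silver: enforce types, drop malformed rows, deduplicate.
bronze = spark.read.format("delta").load("/mnt/bronze/transactions/")
silver = (
    bronze.filter(F.col("amount").isNotNull())
          .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
          .dropDuplicates(["txn_id"])
)
silver.write.format("delta").mode("overwrite").save("/mnt/silver/transactions/")

# Silver -> Gold: daily totals ready for Power BI consumption.
gold = (
    silver.groupBy(F.to_date("txn_ts").alias("txn_date"))
          .agg(F.sum("amount").alias("total_amount"),
               F.count("*").alias("txn_count"))
)
gold.write.format("delta").mode("overwrite").save("/mnt/gold/daily_transactions/")
```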
Your Profile
* 2+ years of experience in data engineering or relevant roles.
* Expertise in working with Tableau, Power BI, and advanced Excel.
* Proficiency in Databricks, including experience with Unity Catalog for data governance, access control, and metadata management.
* Strong skills in advanced SQL for complex query writing and optimization.
* Intermediate to advanced knowledge of Python and/or PySpark for data transformations.
* Exposure to data analysis techniques and tools (with Power BI being a plus).
* Excellent communication and collaboration skills to work closely with both technical and non-technical stakeholders.
* Strong problem-solving skills with a proactive approach to challenges.
Preferred Qualifications (Nice to Have):
* Industry experience in fintech or banking.
* Familiarity with security, compliance, and governance standards specific to financial data.
What We Value:
At our organization, we take pride in fostering a workplace culture based on these core values:
* Respect: For our colleagues, customers, and diverse perspectives.
* Entrepreneurship: Encouraging innovation and ownership over your work.
* Imagination: Approaching challenges with creativity and outside-the-box solutions.
* Passion for the Customer: Delivering solutions that prioritize customer needs while exceeding expectations.
Data Engineer
Posted today
Job Description
We are seeking a highly skilled and experienced Data Engineer to help shape and scale our supply chain and operations analytics infrastructure. In this role, you will work closely with cross-functional teams, including Operations, Finance, and Analytics, to design, build, and monitor scalable, production-grade data pipelines. Your work will be critical to driving data-informed decisions across the business.
What You'll Do:
- Develop and maintain automated ETL pipelines using Python, Snowflake SQL, and related technologies.
- Ensure robust data quality through unit testing, validation, and continuous monitoring.
- Collaborate with stakeholders to ingest and transform large healthcare datasets with accuracy and efficiency.
- Leverage AWS services such as S3, DynamoDB, Batch, and Step Functions for data integration and deployment.
- Optimize performance for pipelines processing large-scale datasets (1GB+).
- Translate business requirements into reliable, scalable data solutions.
What You'll Need:
- 4+ years of hands-on experience as a Data Engineer or in a similar role.
- Proven expertise in Python, SQL, and Snowflake for data engineering tasks.
- Strong experience building and maintaining production-grade ETL pipelines.
- Solid understanding of data validation, transformation, and debugging practices.
- Prior experience with healthcare or claims datasets is highly preferred.
- Practical knowledge of AWS technologies: S3, DynamoDB, Batch, Step Functions.
- Experience working with large datasets and complex data environments.
- Excellent verbal and written English communication skills.
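To ground the Python-plus-Snowflake combination this posting emphasizes, here is a hedged sketch using the official snowflake-connector-python package for a simple post-load validation; the account, credentials, database, and table names are placeholders.

```python
# Hedged sketch: a post-load validation check against Snowflake using
# the official Python connector. Credentials, database, and table
# names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="myorg-myaccount",   # hypothetical account identifier
    user="etl_user",
    password="...",              # use a secrets manager in practice
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="CLAIMS",
)

try:
    cur = conn.cursor()
    # Simple data-quality gate: fail the run if the table is empty.
    cur.execute("SELECT COUNT(*) FROM raw_claims")
    (row_count,) = cur.fetchone()
    if row_count == 0:
        raise RuntimeError("raw_claims is empty; upstream load likely failed")
    print(f"raw_claims row count: {row_count}")
finally:
    conn.close()
```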