59 Data Engineer jobs in the United Arab Emirates
Senior Data Engineer - Big Data/ Hadoop Ecosystem
Posted today
Job Viewed
Job Description
Job Title: Senior Data Engineer - Big Data/ Hadoop Ecosystem
Job Type: Full-time
Location: On-site Dubai, Dubai, United Arab Emirates
Overview
Join our team as a Senior Data Engineer - Big Data/ Hadoop Ecosystem, where you will take the technical lead on pioneering data initiatives within the banking sector. Leveraging your expertise in the Hadoop ecosystem, you will architect, build, and optimize large-scale data systems while mentoring a talented team of data engineers. If you thrive in highly collaborative, asynchronous environments and are passionate about delivering robust data solutions, this role is for you.
Responsibilities
- Design, develop, and optimize scalable data processing systems using the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig, HBase, Flume, Sqoop) and other Big Data technologies.
- Lead, mentor, and inspire a team of data engineers, ensuring timely and high-quality project delivery.
- Engineer, tune, and maintain complex data pipelines in Java, MapReduce, Hive, and Spark, including implementing stream-processing with Spark-Streaming (see the illustrative sketch after this list).
- Design and build efficient dimensional data models and scalable architectures to empower analytics and business intelligence.
- Oversee data integrity analysis, deployment, validation, and auditing of data models for accuracy and operational excellence.
- Leverage advanced SQL skills for performance tuning and optimization of data jobs.
- Collaborate with business intelligence teams to deliver industry-leading dashboards and data products.
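To make the stream-processing responsibility above concrete, here is a minimal, illustrative PySpark Structured Streaming sketch (not the employer's actual pipeline); it uses a local socket source fed by `nc -lk 9999` and maintains running word counts, whereas a production job would read from Flume/Kafka and land results in HDFS, Hive, or HBase:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

# Read lines from a local socket (run `nc -lk 9999` to feed test data).
lines = (
    spark.readStream.format("socket")
    .option("host", "localhost")
    .option("port", 9999)
    .load()
)

# Split each line into words and maintain a running count per word.
words = lines.select(explode(split(lines.value, " ")).alias("word"))
counts = words.groupBy("word").count()

# Stream the running aggregation to the console; a real job would write
# to HDFS, Hive, or HBase and run on the cluster instead.
query = counts.writeStream.outputMode("complete").format("console").start()
query.awaitTermination()
```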
Requirements
- 10+ years of hands-on experience as a Big Data Engineer, with deep technical expertise in the Hadoop ecosystem (Cloudera preferred), Apache Spark, and distributed data frameworks.
- Proven experience leading backend/distributed data systems teams while remaining technically hands-on.
- Advanced proficiency in Java for MapReduce development, as well as strong skills in Python and/or Scala.
- Expertise in Big Data querying tools including Hive, Pig, and Impala.
- Strong experience with both relational (Postgres) and NoSQL databases (Cassandra, HBase).
- Solid understanding of dimensional data modeling and data warehousing principles.
- Proficient in Linux/Unix systems and shell scripting.
- Experience with Azure cloud services (Azure Data Lake, Databricks, HDInsight).
- Knowledge of stream-processing frameworks such as Spark-Streaming or Storm.
- Background in Financial Services or Banking industry, with exposure to data science and machine learning tools.
Data Engineer
Posted today
Job Viewed
Job Description
Property Monitor is the UAE’s leading real estate technology and market intelligence platform, recently acquired by Dubizzle Group. At Property Monitor, we empower developers, brokers, investors, and property professionals with authoritative data and powerful analytics, enabling them to make faster, smarter, and more informed decisions.
As part of Dubizzle Group, we sit alongside five powerhouse brands - including market-leading platforms like Bayut and dubizzle - trusted by over 123 million monthly users. Together, these brands shape how people buy, sell, and connect across real estate, classifieds, and services in the UAE and the broader region.
The Data Engineer will help deliver world-class big data solutions and drive impact for the dubizzle business. You will be responsible for exciting projects covering the end-to-end data life cycle – from raw data integrations with primary and third-party systems, through advanced data modeling, to state-of-the-art data visualization and development of innovative data products.
You will have the opportunity to build and work with both batch and real-time data processing pipelines. While working in a modern cloud-based data warehousing environment alongside a team of diverse, intense and interesting co-workers, you will liaise with other teams – such as product & tech, the core business verticals, trust & safety, finance and others – to enable them to be successful.
In this role, you will be responsible for:
- Raw data integrations with primary and third-party systems
- Data warehouse modelling for operational and application data layers
- Development in Amazon Redshift cluster
- SQL development as part of agile team workflow
- ETL design and implementation in Matillion ETL
- Real-time data pipelines and applications using serverless and managed AWS services such as Lambda, Kinesis, API Gateway, etc. (see the illustrative sketch after this list)
- Design and implementation of data products enabling data-driven features or business solutions
- Data quality, system stability and security
- Coding standards in SQL, Python, ETL design
- Building data dashboards and advanced visualisations in Periscope Data with a focus on UX, simplicity and usability
- Working with other departments on data products – e.g. product & technology, marketing & growth, finance, core business, advertising and others
- Being part of and contributing towards a strong team culture and an ambition to be on the cutting edge of big data
- Working autonomously without supervision on complex projects
- Participating in the early morning ETL status check rota
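As a rough illustration of the real-time responsibility above (serverless AWS services such as Lambda and Kinesis), here is a minimal Python Lambda handler that decodes records delivered by a Kinesis event source mapping; the processing step is a placeholder, not the actual business logic:

```python
import base64
import json

def lambda_handler(event, context):
    """Process records delivered by a Kinesis event source mapping."""
    processed = 0
    for record in event.get("Records", []):
        # Kinesis record payloads arrive base64-encoded in the trigger event.
        payload = base64.b64decode(record["kinesis"]["data"])
        message = json.loads(payload)
        # Placeholder transformation: a real pipeline might clean the event
        # and forward it to Redshift, S3, or another downstream sink.
        print(f"partitionKey={record['kinesis']['partitionKey']} event={message}")
        processed += 1
    return {"processed": processed}
```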
Requirements:
- Top-of-class technical degree in a field such as computer science, engineering, math, or physics.
- 3+ years of experience working with customer-centric data at big data-scale, preferably in an online / e-commerce context
- 2+ years of experience with one or more programming languages, especially Python
- Strong track record in business intelligence solutions, building and scaling data warehouses and data modelling
- Experience with modern big data ETL tools is a plus (e.g. Matillion)
- Experience with AWS data ecosystem (or other cloud providers)
- Experience with modern data visualization platforms such as Sisense (formerly Periscope Data), Google Data Studio, Tableau, MS Power BI etc.
- Knowledge of modern real-time data pipelines is a strong plus (e.g. serverless framework, Lambda, Kinesis, etc.)
- Knowledge of relational and dimensional data models
- Knowledge of terminal operations and Linux workflows
- World-class SQL skills across a variety of relational data warehousing technologies especially in cloud data warehousing (e.g. Amazon Redshift, Google BigQuery, Snowflake, Vertica, etc.)
- Ability to communicate insights and findings to a non-technical audience
What We Offer:
- A fast paced, high performing team.
- Multicultural environment with over 50 different nationalities
- Competitive Tax-free Salary
- Comprehensive Health Insurance
- Annual Air Ticket Allowance
- Employee discounts at multiple vendors across the Emirates
- Rewards & Recognitions
- Learning & Development
Dubizzle Group is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
Data Engineer
Posted today
Job Viewed
Job Description
Education / Qualifications / Professional Training
Bachelor’s Degree in Computer Science or Management, with 4+ years of experience in Vertica and equivalent databases.
- Vertica Certification is a plus.
- Experience with data visualization tools (e.g., Power BI, SAP BI, SAS, Tableau) for data reporting and dashboard creation is beneficial.
More than 4 years of experience required in Vertica database functionalities.
Technical Competencies
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Engineer or similar role with hands-on expertise in designing and managing data solutions in Vertica.
- Strong proficiency in SQL and experience with data modeling and schema design in Vertica.
- In-depth knowledge of ETL processes and tools, particularly for data integration into Vertica (see the illustrative sketch after this list).
- Familiarity with other big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, Azure) is advantageous.
- Understanding of data warehousing concepts and best practices.
- Experience in performance tuning and optimization of Vertica databases.
- Familiarity with Linux environments and shell scripting for data-related automation tasks is a plus.
- Excellent problem-solving skills and the ability to handle large datasets effectively.
- Strong communication and collaboration skills to work effectively within a team-oriented environment.
- Self-motivated, with the ability to work independently and manage multiple tasks and projects simultaneously.
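To give a flavour of the Vertica integration work this role describes, here is a minimal sketch using the open-source vertica_python client to bulk-load a staging table with Vertica's COPY command; the host, credentials, table, and file path are invented for the example:

```python
import vertica_python

# Placeholder connection settings for an illustrative Vertica cluster.
conn_info = {
    "host": "vertica.example.com",
    "port": 5433,
    "user": "etl_user",
    "password": "***",
    "database": "analytics",
}

with vertica_python.connect(**conn_info) as conn:
    cur = conn.cursor()
    # Stream a delimited extract into a staging table; SKIP 1 drops the header.
    # A production load would also configure rejected-row handling.
    with open("/data/sales_2024.csv", "rb") as extract:
        cur.copy(
            "COPY staging.sales_raw FROM STDIN DELIMITER ',' SKIP 1",
            extract,
        )
    conn.commit()
```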
Data Engineer
Posted today
Job Viewed
Job Description
About the Role
We are an emerging AI-native, product-driven, agile start-up under the Abu Dhabi government, and we are seeking a motivated and technically versatile Data Engineer to join our team. You will play a key role in delivering data platforms, pipelines, and ML enablement within a Databricks on Azure environment.
As part of a stream-aligned delivery team, you’ll work closely with Data Scientists, Architects, and Product Managers to build scalable, high-quality data solutions for clients. You'll be empowered by a collaborative environment that values continuous learning, Agile best practices, and technical excellence.
Ideal candidates have strong hands-on experience in Databricks, Python, ADF and are comfortable in fast-paced, client-facing consulting engagements.
Skills and Experience requirements
1. Technical
- Databricks (or similar), e.g. notebooks (Python, SQL), Delta Lake, job scheduling, clusters and workspace management, Unity Catalog, and access-control awareness (see the illustrative sketch after this list)
- Cloud data engineering – ideally Azure, including storage (e.g., ADLS, S3), compute, and secrets management
- Development languages such as Python, SQL, C#, JavaScript, etc., especially for data ingestion, cleaning, and transformation
- ETL / ELT – including structured logging, error handling, reprocessing strategies, APIs, flat files, databases, message queues, event streaming, event sourcing etc.
- Automated testing (ideally TDD), pairing/mobbing, trunk-based development, continuous deployment, and Infrastructure-as-Code (Terraform)
- Git and CI/CD for notebooks, data pipelines, and deployments
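As a hedged illustration of the Databricks and Delta Lake items above, the following notebook-style PySpark sketch reads a raw CSV from an ADLS landing zone, applies light cleaning, and writes a Delta table; the storage path and table name are invented, and `spark` is the session Databricks provides in a notebook:

```python
from pyspark.sql import functions as F

# Read a raw CSV drop from an illustrative ADLS landing container.
raw = (
    spark.read.option("header", True)
    .csv("abfss://landing@examplelake.dfs.core.windows.net/orders/2024/")
)

# Light cleaning: deduplicate, cast the timestamp, drop rows without a key.
cleaned = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("order_id").isNotNull())
)

# Write a managed Delta table; incremental loads would use MERGE or
# partition overwrites instead of a full overwrite.
cleaned.write.format("delta").mode("overwrite").saveAsTable("bronze.orders")
```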
2. Integration & Data Handling
- Experienced in delivering platforms for clients – including file transfer, APIs (REST, etc.), SQL/NoSQL/graph databases, JSON, CSV, XML, Parquet, etc.
- Data validation and profiling - assess incoming data quality. Cope with schema drift, deduplication, and reconciliation
- Testing and monitoring pipelines: Unit tests for transformations, data checks, and pipeline observability
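The testing point above can be illustrated with a small pytest sketch: a hypothetical transformation helper plus two unit tests, the kind of check that would sit alongside data-quality assertions and pipeline observability in practice:

```python
import pytest

def normalise_customer(record: dict) -> dict:
    """Hypothetical transformation: trim names and lowercase email addresses."""
    return {
        "id": record["id"],
        "name": record["name"].strip(),
        "email": record["email"].strip().lower(),
    }

def test_normalise_customer_trims_and_lowercases():
    raw = {"id": 1, "name": "  Aisha ", "email": " Aisha@Example.COM "}
    assert normalise_customer(raw) == {
        "id": 1,
        "name": "Aisha",
        "email": "aisha@example.com",
    }

def test_normalise_customer_missing_email_raises():
    with pytest.raises(KeyError):
        normalise_customer({"id": 2, "name": "Omar"})
```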
3. Working Style
- Comfortable leveraging the best of lean, agile and waterfall approaches. Can contribute to planning, estimation, and documentation, but also collaborative daily re-prioritisation
- Able to explain technical decisions to teammates or clients
- Documents decisions and keeps stakeholders informed
- Comfortable seeking support from other teams for Product, Databricks, Data architecture
- Happy to collaborate with Data Science team on complex subsystems
Nice-to-haves
- MLflow or light MLOps experience (for the data science touchpoints)
- dbt / Dagster / Airflow or similar transformation and orchestration tools
- Understanding of security and compliance (esp. around client data)
- Past experience in consulting or client-facing roles
Candidate Requirements
- 5–8 years (minimum 3–4 years hands-on with cloud/data engineering, 1–2 years in Databricks/Azure, and team/project leadership exposure)
- Bachelor’s degree in Computer Science, Data Engineering, Software Engineering, or Information Systems
Job Type: Full-time
Benefits: Visa, insurance, yearly flight ticket, bonus scheme, and relocation logistics covered
Interviewing process consists of 2 or 3 technical/behavioral interviews
Data Engineer
Posted today
Job Viewed
Job Description
About the Role
We are seeking a motivated and technically versatile Data Engineer to join our team. You will play a key role in delivering data platforms, pipelines, and ML enablement within a Databricks on Azure environment.
As part of a stream-aligned delivery team, you’ll work closely with Data Scientists, Architects, and Product Managers to build scalable, high-quality data solutions for clients. You'll be empowered by a collaborative environment that values continuous learning, Agile best practices, and technical excellence.
Ideal candidates have strong hands-on experience in Databricks, Python, and ADF, and are comfortable in fast-paced, client-facing consulting engagements.
Skills and Experience requirements
1. Technical
- Databricks (or similar), e.g. notebooks (Python, SQL), Delta Lake, job scheduling, clusters and workspace management, Unity Catalog, and access-control awareness
- Cloud data engineering – ideally Azure, including storage (e.g., ADLS, S3), compute, and secrets management
- Development languages such as Python, SQL, C#, JavaScript, etc., especially for data ingestion, cleaning, and transformation
- ETL / ELT – including structured logging, error handling, reprocessing strategies, APIs, flat files, databases, message queues, event streaming, event sourcing etc.
- Automated testing (ideally TDD), pairing/mobbing, trunk-based development, continuous deployment, and Infrastructure-as-Code (Terraform)
- Git and CI/CD for notebooks, data pipelines, and deployments
2. Integration & Data Handling
- Experienced in delivering platforms for clients – including file transfer, APIs (REST, etc.), SQL/NoSQL/graph databases, JSON, CSV, XML, Parquet, etc.
- Data validation and profiling - assess incoming data quality. Cope with schema drift, deduplication, and reconciliation
- Testing and monitoring pipelines: Unit tests for transformations, data checks, and pipeline observability
3. Working Style
- Comfortable leveraging the best of lean, agile and waterfall approaches. Can contribute to planning, estimation, and documentation, but also collaborative daily re-prioritisation
- Able to explain technical decisions to teammates or clients
- Documents decisions and keeps stakeholders informed
- Comfortable seeking support from other teams for Product, Databricks, Data architecture
- Happy to collaborate with Data Science team on complex subsystems
Nice-to-haves
- MLflow or light MLOps experience (for the data science touchpoints)
- dbt / Dagster / Airflow or similar transformation and orchestration tools
- Understanding of security and compliance (esp. around client data)
- Past experience in consulting or client-facing roles
Candidate Requirements
- 5–8 years (minimum 3–4 years hands-on with cloud/data engineering, 1–2 years in Databricks/Azure, and team/project leadership exposure)
- Bachelor’s degree in Computer Science, Data Engineering, Software Engineering, or Information Systems
Disclaimer:
This job posting is not open to recruitment agencies. Any candidate profile submitted by a recruitment agency will be considered as having been received directly from the applicant. Contango reserves the right to contact the candidate directly, without incurring any obligation or liability for payment of fees to the recruitment agency.
Data Engineer
Posted today
Job Viewed
Job Description
Location: Dubai
Who Can Apply: Candidates who are currently in Dubai
Job Type: Contract
Experience: Minimum 8+ years
Job Summary:
We are looking for an experienced Data Engineer to design, develop, and optimize data pipelines, ETL processes, and data integration solutions. The ideal candidate should have expertise in AWS cloud services, data engineering best practices, open-source tools, and data schema design. The role requires hands-on experience with large-scale data processing, real-time data streaming, and cloud-based data architectures.
Key Responsibilities:
- Develop and Maintain Data Pipelines to process structured and unstructured data efficiently.
- Implement ETL/ELT Workflows for batch and real-time data processing.
- Optimize Data Processing Workflows using distributed computing frameworks.
- Ensure Data Integrity and Quality through data validation, cleaning, and transformation techniques.
- Work with AWS Cloud Services, including S3, Redshift, Glue, Lambda, DynamoDB, and Kinesis.
- Leverage Open-Source Tools like Apache Spark, Airflow, Kafka, and Flink for data processing (see the illustrative sketch after this list).
- Manage and Optimize Database Performance for both SQL and NoSQL environments.
- Collaborate with Data Scientists and Analysts to enable AI/ML model deployment and data accessibility.
- Support Data Migration Initiatives from on-premise to cloud-based data platforms.
- Ensure Compliance and Security Standards in handling sensitive and regulated data.
- Develop Data Models and Schemas for efficient storage and retrieval.
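As an illustrative sketch of the orchestration side of these responsibilities (assuming Airflow 2.x, one of the open-source tools named above), here is a minimal daily ETL DAG with placeholder extract and load tasks; the DAG id, bucket, and warehouse target are invented for the example:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_s3(**context):
    # Placeholder: pull from a source system and stage raw files in S3.
    print("extracting to s3://example-bucket/raw/ ...")

def load_to_warehouse(**context):
    # Placeholder: issue a COPY/MERGE into Redshift or another warehouse.
    print("loading staged files into the warehouse ...")

with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
    extract >> load
```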
Required Skills & Qualifications:
- 8+ years of experience in data engineering, data architecture, and cloud computing.
- Strong knowledge of AWS Services such as Glue, Redshift, Athena, Lambda, and S3.
- Expertise in ETL Tools, including Talend, Apache NiFi, Informatica, dbt, and AWS Glue.
- Proficiency in Open-Source Tools such as Apache Spark, Hadoop, Airflow, Kafka, and Flink.
- Strong Programming Skills in Python, SQL, and Scala.
- Experience in Data Schema Design, normalization, and performance optimization.
- Knowledge of Real-time Data Streaming using Kafka, Kinesis, or Apache Flink.
- Experience in Data Warehouse and Data Lake Solutions.
- Hands-on experience with DevOps and CI/CD Pipelines for data engineering workflows.
- Understanding of AI and Machine Learning Data Pipelines.
- Strong analytical and problem-solving skills.
Preferred Qualifications:
- AWS Certified Data Analytics – Specialty or AWS Solutions Architect certification.
- Experience with Kubernetes, Docker, and serverless data processing.
- Exposure to MLOps and data engineering practices for AI/ML solutions.
- Experience with distributed computing and big data frameworks.
Data Engineer
Posted today
Job Viewed
Job Description
The Data Engineer will be responsible for developing semantic models on top of the Data Lake/Data Warehouse to fulfill the self-service BI foundation requirements. This includes data extraction from various data sources and integration into the central data lake/data warehouse using enterprise platforms like Informatica iPaaS.
Key Responsibilities of the Data Engineer
- Designing data warehouse data models based on business requirements.
- Designing, developing, and testing both batch and real-time Extract, Transform and Load (ETL) processes required for data integration.
- Ingesting both structured and unstructured data into the SMBU data lake/data warehouse system.
- Designing and developing semantic models/self-service cubes.
- Performing BI administration and access management to ensure access and reports are properly governed.
- Performing unit testing and data validation to ensure business UAT is successful.
- Performing ad-hoc data analysis and presenting results in a clear manner.
- Assessing data quality of the source systems and proposing enhancements to achieve a satisfactory level of data accuracy.
- Optimizing ETL processes to ensure execution time meets requirements.
- Maintaining and architecting ETL pipelines to ensure data is loaded on time on a regular basis.
Requirements
- 5 to 8 years of overall experience.
- Proven experience in the development of dimensional models in Azure Synapse with strong SQL knowledge.
- Minimum of 3 years working as a Data Engineer in the Azure ecosystem, specifically using Synapse, ADF & Databricks.
- Preferably 3 years of experience with data warehousing, ETL development, SQL Queries, Synapse, ADF, PySpark, and Informatica iPaaS for data ingestion & data modeling.
Python Engineer / Big Data
Posted today
Job Viewed
Job Description
Seeking a Python Engineer in Dubai to build scalable backend systems using Big Data tech, optimize systems, and collaborate across teams. Proficiency in cloud platforms and data engineering is key.
Description
Are you passionate about building scalable backend systems that handle vast amounts of data? Do you have a deep understanding of Python engineering and experience working with Big Data technologies? If so, we want to hear from you!
What You'll Do:
Design, build, and maintain scalable and robust backend services using Python.
Work on data pipelines, transforming large datasets into meaningful insights.
Collaborate with data scientists, engineers, and product teams to optimize system performance.
Leverage your knowledge in Big Data technologies (e.g., Snowflake, Hadoop, Spark, Kafka) to create data-driven solutions.
Ensure smooth data flow and storage across various systems, ensuring high availability and fault tolerance.
Continuously improve codebase quality through testing, peer review, and adherence to best practices.
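To illustrate the kind of streaming ingestion mentioned above, here is a minimal consumer sketch using the kafka-python client; the broker address, topic name, and downstream sinks are placeholders rather than details from this posting:

```python
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream-events",                      # hypothetical topic
    bootstrap_servers=["localhost:9092"],      # placeholder broker
    group_id="analytics-backend",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Placeholder processing: a real service might enrich the event and
    # persist it to PostgreSQL, Cassandra, or an object store.
    print(f"offset={message.offset} user={event.get('user_id')}")
```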
What We're Looking For:
Proven experience as a Backend Engineer with a focus on Python.
Strong understanding of Big Data architectures and tools (e.g., Hadoop, Spark, Flink, etc.).
Experience with data engineering concepts, including ETL pipelines, data warehousing, and real-time streaming.
Proficiency in cloud platforms (AWS, GCP, Azure) and containerization technologies (Docker, Kubernetes).
Solid experience with relational and non-relational databases (e.g., PostgreSQL, MongoDB, Cassandra).
Problem-solving mindset with a strong ability to debug and optimize code.
Excellent communication skills and a team player.
Bonus Points:
Familiarity with machine learning models and data science workflows.
Experience with RESTful API design and microservices architecture.
Knowledge of DevOps tools and CI/CD pipelines.
Python Engineer (Big Data)
Posted today
Job Viewed
Job Description
Python Engineer (Big Data) in Dubai: Design scalable backend systems, handle data pipelines, and use Big Data tech. Full-Time, experience required.
Description
Python Engineer (Big Data) in Dubai: Design scalable backend systems using Python, collaborate on data pipelines, and leverage Big Data tech like Hadoop and Spark. Experience required.
Location: Dubai, UAE
Type: Full-Time
Are you passionate about building scalable backend systems that handle vast amounts of data? Do you have a deep understanding of Python engineering and experience working with Big Data technologies? If so, we want to hear from you!
What You'll Do:
- Design, build, and maintain scalable and robust backend services using Python.
- Work on data pipelines, transforming large datasets into meaningful insights.
- Collaborate with data scientists, engineers, and product teams to optimize system performance.
- Leverage your knowledge in Big Data technologies (e.g., Snowflake, Hadoop, Spark, Kafka) to create data-driven solutions.
- Ensure smooth data flow and storage across various systems, ensuring high availability and fault tolerance.
- Continuously improve codebase quality through testing, peer review, and adherence to best practices.
What We're Looking For:
- Proven experience as a Backend Engineer with a focus on Python.
- Strong understanding of Big Data architectures and tools (e.g., Hadoop, Spark, Flink, etc.).
- Experience with data engineering concepts, including ETL pipelines, data warehousing, and real-time streaming.
- Proficiency in cloud platforms (AWS, GCP, Azure) and containerization technologies (Docker, Kubernetes).
- Solid experience with relational and non-relational databases (e.g., PostgreSQL, MongoDB, Cassandra).
- Problem-solving mindset with a strong ability to debug and optimize code.
- Excellent communication skills and a team player.
Bonus Points:
- Familiarity with machine learning models and data science workflows.
- Experience with RESTful API design and microservices architecture.
- Knowledge of DevOps tools and CI/CD pipelines.
Interested?
Explore numerous data engineer positions that involve designing, building, and managing data infrastructure. These roles require expertise in