84 Data Engineer jobs in Abu Dhabi
Big Data Engineer
Posted today
Job Description
Apt Resources is seeking an experienced Big Data Engineer for a government client in Abu Dhabi. You will design and implement large-scale data solutions to support AI/ML initiatives and public sector digital transformation.
Key Responsibilities:
Data Pipeline Development:
- Build robust data pipelines using Python, SQL/NoSQL, and Airflow (a minimal illustrative sketch follows at the end of this listing)
- Develop ETL/ELT processes for structured and unstructured data
- Manage data lakes and optimize storage solutions
Data Infrastructure:
- Design efficient data models for analytics
- Implement data governance and quality frameworks
- Work with cloud-based data platforms (Azure preferred)
AI/ML Support:
- Prepare and process datasets for machine learning applications
- Collaborate with ML teams on feature engineering
Requirements:
- 10-12 years of hands-on big data experience
- Expertise in:
- Python and SQL/NoSQL databases
- Airflow for workflow orchestration
- ETL/ELT pipeline development
- Cloud data platforms (Azure, AWS, or GCP)
Salary: To be discussed
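For context, a minimal sketch of the kind of Airflow pipeline this listing describes: a three-step extract/transform/load DAG in Python. The task logic, names, and schedule are illustrative assumptions, not part of the posting.

```python
# Illustrative only (Airflow 2.x): a tiny ETL DAG. A real pipeline would
# read from SQL/NoSQL sources and write to a data lake; steps are stubbed.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Stand-in for a query against a source system.
    return [{"id": 1, "value": "raw"}]


def transform(ti):
    # Pull the upstream task's return value from XCom and reshape it.
    rows = ti.xcom_pull(task_ids="extract")
    return [{**row, "value": row["value"].upper()} for row in rows]


def load(ti):
    rows = ti.xcom_pull(task_ids="transform")
    print(f"would write {len(rows)} rows to the data lake")


with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
):
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)
    load_t = PythonOperator(task_id="load", python_callable=load)
    extract_t >> transform_t >> load_t
```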
Big Data and AI Analytics Platform Leader
Posted today
Job Description
Senior Product Manager for Big Data and AI Analytics Platform
We are seeking an experienced and strategic Senior Product Manager to lead the evolution and commercialization of our in-house Big Data & AI Analytics Platform.
This role requires a unique blend of deep technical expertise, strong product leadership, and a sharp commercial mindset.
The ideal candidate will be responsible for defining and driving the long-term vision and roadmap for our analytics platform, aligning it with evolving market trends in GenAI, BI, and big data processing.
Responsibilities:
- Define and drive the long-term vision and roadmap for our analytics platform.
- Build and lead a high-performing product team with strong technical and analytical capabilities.
- Establish and refine product processes across discovery, planning, delivery, and iteration.
- Drive cross-functional alignment across engineering, data, security, and go-to-market functions.
- Define and execute the go-to-market strategy including packaging, pricing, and competitive positioning.
Requirements:
To succeed in this role, you should have:
- 10+ years of product management experience, including 3–5 years in a senior leadership role.
- A deep understanding of building metadata-driven, data-intensive platforms, with proven success bringing analytics, big data processing, or AI platforms to market.
- Commercial mindset with experience in GTM planning, pricing, and positioning.
- Deep understanding of data architectures: data lake, data warehouse, processing engines (e.g., Spark, Trino, ClickHouse), data governance, and security policies.
- Hands-on familiarity with AI/ML workflows including LLMs, embeddings, RAG, and model lifecycle integration (see the sketch after this list).
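As a concrete touchstone for the RAG familiarity asked for above, a minimal, hypothetical sketch of the retrieval step: rank documents by cosine similarity between a query embedding and document embeddings, then assemble a grounded prompt. The toy vectors stand in for real embedding-model output.

```python
# Hypothetical sketch of RAG retrieval with toy 2-d "embeddings".
import numpy as np

documents = ["Revenue grew 12% in Q3.", "The platform supports Spark and Trino."]
doc_vectors = np.array([[0.9, 0.1], [0.2, 0.8]])  # stand-ins for embeddings


def retrieve(query_vector, k=1):
    # Rank documents by cosine similarity to the query embedding.
    sims = doc_vectors @ query_vector / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector)
    )
    top = np.argsort(sims)[::-1][:k]
    return [documents[i] for i in top]


query_vector = np.array([0.1, 0.9])  # stand-in for an embedded user question
context = "\n".join(retrieve(query_vector))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
print(prompt)
```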
What We Offer:
As a Senior Product Manager at our company, you will enjoy:
- A competitive remuneration package.
- Opportunities for personal growth and continuous learning.
- An open, diverse, and inclusive work environment.
Big Data Expert for Public Sector Transformation
Posted today
Job Description
Apt Resources seeks an experienced data specialist to design and implement large-scale data solutions.
Key Responsibilities:
- Build robust data pipelines using Python, SQL/NoSQL, and Airflow.
- Develop ETL/ELT processes for structured/unstructured data.
- Manage data lakes and optimize storage solutions.
- Design efficient data models for analytics.
- Implement data governance and quality frameworks.
- Work with cloud-based data platforms (Azure preferred).
- Prepare and process datasets for machine learning applications.
- Collaborate with ML teams on feature engineering.
Requirements:
- 10-12 years of hands-on big data experience.
- Expertise in:
- Python and SQL/NoSQL databases.
- Airflow for workflow orchestration.
- ETL/ELT pipeline development.
- Cloud data platforms (Azure, AWS, or GCP).
The successful candidate will have a strong understanding of big data technologies and the ability to design and implement scalable data solutions.
This is a unique opportunity to work with a government client in Abu Dhabi and contribute to public sector digital transformation.
ML/Data Scientist and Backend Engineers
Posted today
Job Description
Job Location: India
Number Of Positions: 6
Work Experience: 8-10 years
Profile Send by Date:
Our client in Abu Dhabi is looking for the following:
- 2x Senior ML Engineers / Data Scientists
- 4x Software Engineers
ML Engineer / Data Scientist
- 5+ years of experience in AI, machine learning, RL, data science, or a related field.
- Proven experience using and fine-tuning LLMs.
- Master's or PhD in AI, Data Science, Statistics, Computer Science, or a related field.
- Proficient in programming languages such as Python or R; candidates with awards from ACM/ICPC, NOI/IOI, TopCoder, Kaggle, or other competitions are preferred.
- Strong analytical and statistical modeling skills.
- Experience with machine learning and generative AI frameworks and tools (e.g., scikit-learn, TensorFlow, PyTorch, LangChain, Weaviate, LangGraph, LlamaIndex).
- Proven track record of applying data science to solve real-world problems.
- Excellent communication and collaboration skills.
Data Engineer
Posted today
Job Description
About the Role
We are seeking a motivated and technically versatile Data Engineer to join our team. You will play a key role in delivering data platforms, pipelines, and ML enablement within a Databricks on Azure environment.
As part of a stream-aligned delivery team, you’ll work closely with Data Scientists, Architects, and Product Managers to build scalable, high-quality data solutions for clients. You'll be empowered by a collaborative environment that values continuous learning, Agile best practices, and technical excellence.
Ideal candidates have strong hands-on experience in Databricks, Python, and Azure Data Factory (ADF), and are comfortable in fast-paced, client-facing consulting engagements.
Skills and Experience requirements
1. Technical
- Databricks (or similar), e.g. notebooks (Python, SQL), Delta Lake, job scheduling, clusters and workspace management, Unity Catalog, access-control awareness (a brief sketch follows this list)
- Cloud data engineering – ideally Azure, including storage (e.g., ADLS, S3), compute, and secrets management
- Development languages such as Python, SQL, C#, JavaScript, etc., especially for data ingestion, cleaning, and transformation
- ETL/ELT – including structured logging, error handling, reprocessing strategies, APIs, flat files, databases, message queues, event streaming, event sourcing, etc.
- Automated testing (ideally TDD), pairing/mobbing, Trunk-Based Development, Continuous Deployment, and Infrastructure-as-Code (Terraform)
- Git and CI/CD for notebooks, data pipelines, and deployments
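A brief, hypothetical sketch of the Databricks/Delta Lake basics named in the first bullet: write a DataFrame as a Delta table and read it back. The table name is an illustrative assumption; on Databricks, the `spark` session is provided by the runtime.

```python
from pyspark.sql import SparkSession

# On Databricks, `spark` already exists; getOrCreate() keeps the sketch
# runnable elsewhere too (with Delta Lake configured).
spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["id", "name"])

# Delta is the default table format on Databricks; the schema/table
# name "demo.users" is hypothetical.
df.write.format("delta").mode("overwrite").saveAsTable("demo.users")

spark.table("demo.users").show()
```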
2. Integration & Data Handling
- Experienced in delivering platforms for clients – including file transfer, APIs (REST, etc.), SQL/NoSQL/graph databases, JSON, CSV, XML, Parquet, etc.
- Data validation and profiling: assess incoming data quality; cope with schema drift, deduplication, and reconciliation (see the sketch after this list)
- Testing and monitoring pipelines: Unit tests for transformations, data checks, and pipeline observability
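To make the schema-drift and deduplication bullet concrete, a small, hypothetical pandas sketch: align each incoming batch to an expected schema, then keep only the latest record per business key. Column names are illustrative assumptions.

```python
import pandas as pd

EXPECTED_COLUMNS = ["customer_id", "email", "updated_at"]


def normalise(batch: pd.DataFrame) -> pd.DataFrame:
    # Tolerate schema drift: add missing expected columns (as NaN),
    # drop unexpected extras.
    aligned = batch.reindex(columns=EXPECTED_COLUMNS)
    # Deduplicate on the business key, keeping the latest record.
    return (
        aligned.sort_values("updated_at")
        .drop_duplicates(subset="customer_id", keep="last")
    )


incoming = pd.DataFrame(
    [
        {"customer_id": 1, "email": "a@x.com", "updated_at": "2024-01-01"},
        {"customer_id": 1, "email": "a@y.com", "updated_at": "2024-02-01",
         "new_upstream_field": "ignored"},  # drifted column is dropped
    ]
)
print(normalise(incoming))  # one row: the 2024-02-01 record
```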
3. Working Style
- Comfortable leveraging the best of lean, agile, and waterfall approaches; can contribute to planning, estimation, and documentation, as well as collaborative daily re-prioritisation
- Able to explain technical decisions to teammates or clients
- Documents decisions and keeps stakeholders informed
- Comfortable seeking support from other teams on product, Databricks, and data architecture
- Happy to collaborate with the Data Science team on complex subsystems
Nice-to-haves
- MLflow or light MLOps experience (for the data science touchpoints)
- dbt / Dagster / Airflow or similar transformation and orchestration tools
- Understanding of security and compliance (esp. around client data)
- Past experience in consulting or client-facing roles
Candidate Requirements
- 5–8 years (minimum 3–4 years hands-on with cloud/data engineering, 1–2 years in Databricks/Azure, and team/project leadership exposure)
- Bachelor’s degree in Computer Science, Data Engineering, Software Engineering, or Information Systems
Disclaimer:
This job posting is not open to recruitment agencies. Any candidate profile submitted by a recruitment agency will be considered as being received directly from an applicant. Contango reserves the right to contact the candidate directly, without incurring any obligation or liability for payment of any fees to the recruitment agency.
Data Engineer
Posted today
Job Description
About the Role
We are an emerging AI-native, product-driven, agile start-up under the Abu Dhabi government, and we are seeking a motivated and technically versatile Data Engineer to join our team. You will play a key role in delivering data platforms, pipelines, and ML enablement within a Databricks on Azure environment.
As part of a stream-aligned delivery team, you’ll work closely with Data Scientists, Architects, and Product Managers to build scalable, high-quality data solutions for clients. You'll be empowered by a collaborative environment that values continuous learning, Agile best practices, and technical excellence.
Ideal candidates have strong hands-on experience in Databricks, Python, and ADF, and are comfortable in fast-paced, client-facing consulting engagements.
Skills and Experience requirements
1. Technical
- Databricks (or similar), e.g. notebooks (Python, SQL), Delta Lake, job scheduling, clusters and workspace management, Unity Catalog, access-control awareness
- Cloud data engineering – ideally Azure, including storage (e.g., ADLS, S3), compute, and secrets management
- Development languages such as Python, SQL, C#, JavaScript, etc., especially for data ingestion, cleaning, and transformation
- ETL/ELT – including structured logging, error handling, reprocessing strategies, APIs, flat files, databases, message queues, event streaming, event sourcing, etc. (a brief sketch follows this list)
- Automated testing (ideally TDD), pairing/mobbing, Trunk-Based Development, Continuous Deployment, and Infrastructure-as-Code (Terraform)
- Git and CI/CD for notebooks, data pipelines, and deployments
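As an illustration of the structured-logging, error-handling, and reprocessing points in the ETL/ELT bullet, a minimal, hypothetical sketch: log each record's outcome as a JSON line and park failures in a dead-letter list for a later reprocessing pass. Record shapes and field names are illustrative.

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("etl")

dead_letter = []  # failed records, kept for a reprocessing pass


def transform(record: dict) -> dict:
    # Hypothetical transformation: convert an amount to integer cents.
    return {"id": record["id"], "amount_cents": round(record["amount"] * 100)}


def process(batch: list[dict]) -> list[dict]:
    out = []
    for record in batch:
        try:
            out.append(transform(record))
            log.info(json.dumps({"event": "processed", "id": record.get("id")}))
        except (KeyError, TypeError) as exc:
            # Structured failure log plus dead-lettering, not a crash.
            dead_letter.append(record)
            log.info(json.dumps({"event": "failed", "id": record.get("id"),
                                 "error": str(exc)}))
    return out


process([{"id": 1, "amount": 9.99}, {"id": 2}])  # second record is dead-lettered
```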
2. Integration & Data Handling
- Experienced in delivering platforms for clients – including file transfer, APIs (REST, etc.), SQL/NoSQL/graph databases, JSON, CSV, XML, Parquet, etc.
- Data validation and profiling: assess incoming data quality; cope with schema drift, deduplication, and reconciliation
- Testing and monitoring pipelines: Unit tests for transformations, data checks, and pipeline observability
3. Working Style
- Comfortable leveraging the best of lean, agile, and waterfall approaches; can contribute to planning, estimation, and documentation, as well as collaborative daily re-prioritisation
- Able to explain technical decisions to teammates or clients
- Documents decisions and keeps stakeholders informed
- Comfortable seeking support from other teams on product, Databricks, and data architecture
- Happy to collaborate with the Data Science team on complex subsystems
Nice-to-haves
- MLflow or light MLOps experience (for the data science touchpoints)
- dbt / Dagster / Airflow or similar transformation and orchestration tools
- Understanding of security and compliance (esp. around client data)
- Past experience in consulting or client-facing roles
Candidate Requirements
- 5–8 years (minimum 3–4 years hands-on with cloud/data engineering, 1–2 years in Databricks/Azure, and team/project leadership exposure)
- Bachelor’s degree in Computer Science, Data Engineering, Software Engineering, or Information Systems
Job Type: Full-time
Benefits: visa, insurance, yearly flight ticket, bonus scheme, relocation logistics covered
The interviewing process consists of two or three technical/behavioral interviews.
Data Engineer
Posted today
Job Description
We are looking for a Data Engineer in Abu Dhabi; please find the job description below.
Key Skills / Requirements:
- Communication: fluent; experience: 6+ years
- Modelling: master data, reference data, business keys, hashing mechanisms (must)
- Frameworks: metadata, DQ, reconciliation, error handling
- ADF: ingestion with and without CDC, orchestration, load-based optimization
- Databricks: Spark, performance optimization, compute behavior, Delta-load scenarios, cost calculation, cataloguing, access provisioning, cluster management, debugging
- Auto Loader & Delta ingestion: processing thousands of files and millions of records (must; see the sketch after this list)
- Lakehouse functionality vs. data lake behavior
- Infra understanding: basics such as VMs, endpoints, private connections, managed identities, subnets, VNets, regions, alerting
- Power BI: connectivity and data-sharing mechanisms
- DevOps: basic CI/CD working mechanisms, standards for ADF and Databricks, release-pipeline behavior, DAB
- Healthcare / health insurance understanding (good to have)
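A minimal, hypothetical sketch of the Auto Loader & Delta ingestion bullet: incrementally pick up files from cloud storage with the `cloudFiles` source and stream them into a Delta table. The paths, checkpoint locations, and table name are illustrative assumptions; `spark` is supplied by the Databricks runtime.

```python
# Hypothetical Databricks Auto Loader sketch: incremental file ingestion
# into a Delta table. Run inside a Databricks notebook/job.
stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/events_schema")
    .load("/mnt/landing/events/")
)

(
    stream.writeStream
    .option("checkpointLocation", "/mnt/checkpoints/events")
    .trigger(availableNow=True)  # process the current backlog, then stop
    .toTable("bronze.events")
)
```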
Seniority level: Mid-Senior level
Employment type: Full-time
Job function: Information Technology