59 Data Engineer jobs in the United Arab Emirates

Senior Data Engineer - Big Data/ Hadoop Ecosystem

Dubai, Dubai micro1

Posted today

Job Description

Job Title: Senior Data Engineer - Big Data/ Hadoop Ecosystem

Job Type: Full-time

Location: On-site Dubai, Dubai, United Arab Emirates

Overview

Join our team as a Senior Data Engineer - Big Data/ Hadoop Ecosystem, where you will take the technical lead on pioneering data initiatives within the banking sector. Leveraging your expertise in the Hadoop ecosystem, you will architect, build, and optimize large-scale data systems while mentoring a talented team of data engineers. If you thrive in highly collaborative, asynchronous environments and are passionate about delivering robust data solutions, this role is for you.

Responsibilities
  • Design, develop, and optimize scalable data processing systems using the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig, HBase, Flume, Sqoop) and other Big Data technologies.
  • Lead, mentor, and inspire a team of data engineers, ensuring timely and high-quality project delivery.
  • Engineer, tune, and maintain complex data pipelines in Java, MapReduce, Hive, and Spark, including implementing stream-processing with Spark-Streaming.
  • Design and build efficient dimensional data models and scalable architectures to empower analytics and business intelligence.
  • Oversee data integrity analysis, deployment, validation, and auditing of data models for accuracy and operational excellence.
  • Leverage advanced SQL skills for performance tuning and optimization of data jobs.
  • Collaborate with business intelligence teams to deliver industry-leading dashboards and data products.
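The map/shuffle/reduce pattern behind the MapReduce and Spark work described above can be sketched in miniature. This is a toy, single-process illustration (real jobs run distributed over HDFS data); the sample input is invented:

```python
from collections import defaultdict

def map_phase(records):
    # Emit (key, 1) pairs, as a word-count mapper would.
    for line in records:
        for word in line.split():
            yield (word.lower(), 1)

def shuffle_phase(pairs):
    # Group values by key; the framework does this between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Aggregate each key's values into a single count.
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle_phase(map_phase(["big data", "big pipelines"])))
# counts == {"big": 2, "data": 1, "pipelines": 1}
```

The same three stages appear whether the job is written as Java MapReduce, a Hive query plan, or a Spark transformation chain; the framework's contribution is distributing them.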
Qualifications
  • 10+ years of hands-on experience as a Big Data Engineer, with deep technical expertise in the Hadoop ecosystem (Cloudera preferred), Apache Spark, and distributed data frameworks.
  • Proven experience leading backend/distributed data systems teams while remaining technically hands-on.
  • Advanced proficiency in Java for MapReduce development, as well as strong skills in Python and/or Scala.
  • Expertise in Big Data querying tools including Hive, Pig, and Impala.
  • Strong experience with both relational (Postgres) and NoSQL databases (Cassandra, HBase).
  • Solid understanding of dimensional data modeling and data warehousing principles.
  • Proficient in Linux/Unix systems and shell scripting.
Preferred Qualifications
  • Experience with Azure cloud services (Azure Data Lake, Databricks, HDInsight).
  • Knowledge of stream-processing frameworks such as Spark-Streaming or Storm.
  • Background in Financial Services or Banking industry, with exposure to data science and machine learning tools.

Data Engineer

Dubai, Dubai Bayut | dubizzle

Posted today

Job Description

Property Monitor is the UAE’s leading real estate technology and market intelligence platform, recently acquired by Dubizzle Group. At Property Monitor, we empower developers, brokers, investors, and property professionals with authoritative data and powerful analytics, enabling them to make faster, smarter, and more informed decisions.

As part of Dubizzle Group, we stand alongside five powerhouse brands, including market-leading platforms like Bayut and dubizzle, trusted by over 123 million monthly users. Together, these brands shape how people buy, sell, and connect across real estate, classifieds, and services in the UAE and the broader region.

The Data Engineer will help deliver world-class big data solutions and drive impact for the dubizzle business. You will be responsible for exciting projects covering the end-to-end data life cycle – from raw data integrations with primary and third-party systems, through advanced data modeling, to state-of-the-art data visualization and development of innovative data products.

You will have the opportunity to build and work with both batch and real-time data processing pipelines. While working in a modern cloud-based data warehousing environment alongside a team of diverse, intense and interesting co-workers, you will liaise with other teams, such as product & tech, the core business verticals, trust & safety, finance and others, to enable them to be successful.

In this role, you will be responsible for:

  • Raw data integrations with primary and third-party systems
  • Data warehouse modelling for operational and application data layers
  • Development in Amazon Redshift cluster
  • SQL development as part of agile team workflow
  • ETL design and implementation in Matillion ETL
  • Real-time data pipelines and applications using serverless and managed AWS services such as Lambda, Kinesis, API Gateway, etc.
  • Design and implementation of data products enabling data-driven features or business solutions
  • Data quality, system stability and security
  • Coding standards in SQL, Python, ETL design
  • Building data dashboards and advanced visualisations in Periscope Data with a focus on UX, simplicity and usability
  • Working with other departments on data products – i.e. product & technology, marketing & growth, finance, core business, advertising and others
  • Being part of, and contributing to, a strong team culture with an ambition to be on the cutting edge of big data
  • Working autonomously on complex projects
  • Participating in the early morning ETL status check rota
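The real-time pipeline work listed above pairs Kinesis with Lambda. Below is a hypothetical sketch of such a handler; the event shape follows the documented Kinesis-to-Lambda integration, while the payload fields (`user_id`, `action`) are invented for illustration:

```python
import base64
import json

def handler(event, context=None):
    processed = []
    for record in event.get("Records", []):
        # Kinesis delivers payloads base64-encoded under kinesis.data.
        raw = base64.b64decode(record["kinesis"]["data"])
        payload = json.loads(raw)
        processed.append(payload)
    return {"processed": len(processed)}

# Local smoke test with a synthetic event:
event = {"Records": [{"kinesis": {"data": base64.b64encode(
    json.dumps({"user_id": 1, "action": "view"}).encode()).decode()}}]}
result = handler(event)
```

In a deployed pipeline the handler would write the decoded records onward (e.g. to Redshift or S3) rather than return them, but the decode-parse-route skeleton is the same.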

Requirements:

  • Top-of-class technical degree in a field such as computer science, engineering, math, or physics
  • 3+ years of experience working with customer-centric data at big data-scale, preferably in an online / e-commerce context
  • 2+ years of experience with one or more programming languages, especially Python
  • Strong track record in business intelligence solutions, building and scaling data warehouses and data modelling
  • Experience with modern big data ETL tools is a plus (e.g. Matillion)
  • Experience with AWS data ecosystem (or other cloud providers)
  • Experience with modern data visualization platforms such as Sisense (formerly Periscope Data), Google Data Studio, Tableau, MS Power BI etc.
  • Knowledge of modern real-time data pipelines is a strong plus (e.g. serverless frameworks, Lambda, Kinesis)
  • Knowledge of relational and dimensional data models
  • Knowledge of terminal operations and Linux workflows
  • World-class SQL skills across a variety of relational data warehousing technologies especially in cloud data warehousing (e.g. Amazon Redshift, Google BigQuery, Snowflake, Vertica, etc.)
  • Ability to communicate insights and findings to a non-technical audience

What We Offer:

  • A fast-paced, high-performing team
  • Multicultural environment with over 50 different nationalities
  • Competitive Tax-free Salary
  • Comprehensive Health Insurance
  • Annual Air Ticket Allowance
  • Employee discounts at multiple vendors across the Emirates
  • Rewards & Recognitions
  • Learning & Development

Dubizzle Group is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.


Data Engineer

Dubai, Dubai Dubizzle Group

Posted today

Job Description

Property Monitor is the UAE’s leading real estate technology and market intelligence platform, recently acquired by Dubizzle Group. At Property Monitor, we empower developers, brokers, investors, and property professionals with authoritative data and powerful analytics, enabling them to make faster, smarter, and more informed decisions.

As part of Dubizzle Group, we stand alongside five powerhouse brands, including market-leading platforms like Bayut and dubizzle, trusted by over 123 million monthly users. Together, these brands shape how people buy, sell, and connect across real estate, classifieds, and services in the UAE and the broader region.

The Data Engineer will help deliver world-class big data solutions and drive impact for the dubizzle business. You will be responsible for exciting projects covering the end-to-end data life cycle – from raw data integrations with primary and third-party systems, through advanced data modeling, to state-of-the-art data visualization and development of innovative data products.

You will have the opportunity to build and work with both batch and real-time data processing pipelines. While working in a modern cloud-based data warehousing environment alongside a team of diverse, intense and interesting co-workers, you will liaise with other teams, such as product & tech, the core business verticals, trust & safety, finance and others, to enable them to be successful.

In this role, you will be responsible for:

  • Raw data integrations with primary and third-party systems
  • Data warehouse modelling for operational and application data layers
  • Development in Amazon Redshift cluster
  • SQL development as part of agile team workflow
  • ETL design and implementation in Matillion ETL
  • Real-time data pipelines and applications using serverless and managed AWS services such as Lambda, Kinesis, API Gateway, etc.
  • Design and implementation of data products enabling data-driven features or business solutions
  • Data quality, system stability and security
  • Coding standards in SQL, Python, ETL design
  • Building data dashboards and advanced visualisations in Periscope Data with a focus on UX, simplicity and usability
  • Working with other departments on data products – i.e. product & technology, marketing & growth, finance, core business, advertising and others
  • Being part of, and contributing to, a strong team culture with an ambition to be on the cutting edge of big data
  • Working autonomously on complex projects
  • Participating in the early morning ETL status check rota
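The data-quality responsibility listed above usually starts with profiling rows before they are loaded. A minimal sketch of such a pre-load check, with hypothetical column and key names, might look like:

```python
from collections import Counter

def quality_report(rows, required_cols, key_col):
    # Count nulls per required column and duplicate business keys.
    nulls = Counter()
    keys = Counter()
    for row in rows:
        keys[row.get(key_col)] += 1
        for col in required_cols:
            if row.get(col) in (None, ""):
                nulls[col] += 1
    duplicates = sorted(k for k, n in keys.items() if n > 1)
    return {"rows": len(rows), "null_counts": dict(nulls),
            "duplicate_keys": duplicates}

rows = [
    {"listing_id": 1, "price": 100},
    {"listing_id": 1, "price": None},
    {"listing_id": 2, "price": 250},
]
report = quality_report(rows, required_cols=["price"], key_col="listing_id")
# report["duplicate_keys"] == [1]; report["null_counts"] == {"price": 1}
```

In practice the same metrics would be computed in SQL against staging tables, but keeping the check as a pure function makes it easy to unit-test in the agile workflow the posting describes.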

Requirements:

  • Top-of-class technical degree in a field such as computer science, engineering, math, or physics
  • 3+ years of experience working with customer-centric data at big data-scale, preferably in an online / e-commerce context
  • 2+ years of experience with one or more programming languages, especially Python
  • Strong track record in business intelligence solutions, building and scaling data warehouses and data modelling
  • Experience with modern big data ETL tools is a plus (e.g. Matillion)
  • Experience with AWS data ecosystem (or other cloud providers)
  • Experience with modern data visualization platforms such as Sisense (formerly Periscope Data), Google Data Studio, Tableau, MS Power BI etc.
  • Knowledge of modern real-time data pipelines is a strong plus (e.g. serverless frameworks, Lambda, Kinesis)
  • Knowledge of relational and dimensional data models
  • Knowledge of terminal operations and Linux workflows
  • World-class SQL skills across a variety of relational data warehousing technologies especially in cloud data warehousing (e.g. Amazon Redshift, Google BigQuery, Snowflake, Vertica, etc.)
  • Ability to communicate insights and findings to a non-technical audience

What We Offer:

  • A fast-paced, high-performing team
  • Multicultural environment with over 50 different nationalities
  • Competitive Tax-free Salary
  • Comprehensive Health Insurance
  • Annual Air Ticket Allowance
  • Employee discounts at multiple vendors across the Emirates
  • Rewards & Recognitions
  • Learning & Development

Dubizzle Group is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.


Data Engineer

Dubai, Dubai Theintechgroup

Posted today

Job Description

8. Education / Qualifications / Professional Training

Bachelor’s degree in Computer Science or Management, with 4+ years of experience in Vertica and equivalent databases.

  • Vertica Certification is a plus.
  • Experience with data visualization tools (e.g., Power BI, SAP BI, SAS, Tableau) for data reporting and dashboard creation is beneficial.
8.2 Work Experience

More than 4 years of experience with Vertica database functionalities is required.

8.3 Technical Competencies
  1. Bachelor’s degree in Computer Science, Information Technology, or a related field.
  2. Proven experience as a Data Engineer or similar role with hands-on expertise in designing and managing data solutions in Vertica.
  3. Strong proficiency in SQL and experience with data modeling and schema design in Vertica.
  4. In-depth knowledge of ETL processes and tools, particularly for data integration into Vertica.
  5. Familiarity with other big data technologies (e.g., Hadoop, Spark) and cloud platforms (e.g., AWS, Azure) is advantageous.
  6. Understanding of data warehousing concepts and best practices.
  7. Experience in performance tuning and optimization of Vertica databases.
  8. Familiarity with Linux environments and shell scripting for data-related automation tasks is a plus.
  9. Excellent problem-solving skills and the ability to handle large datasets effectively.
  10. Strong communication and collaboration skills to work effectively within a team-oriented environment.
  11. Self-motivated, with the ability to work independently and manage multiple tasks and projects simultaneously.
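One common Vertica tuning-and-cleanup pattern covered by the competencies above is deduplicating reads with a `ROW_NUMBER()` window, which Vertica supports as standard SQL. The table and column names below are hypothetical; generating the statement in Python keeps the sketch self-contained:

```python
def dedup_query(table, key_cols, order_col):
    # Keep the newest row per business key using a window function.
    partition = ", ".join(key_cols)
    return (
        f"SELECT * FROM ("
        f"SELECT *, ROW_NUMBER() OVER ("
        f"PARTITION BY {partition} ORDER BY {order_col} DESC) AS rn "
        f"FROM {table}) t WHERE rn = 1;"
    )

sql = dedup_query("sales_fact", ["order_id"], "loaded_at")
```

Against a real cluster the generated query would be issued through a Vertica client; for large tables, aligning the window's partition keys with the projection's sort order is what makes this cheap.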

Data Engineer

Abu Dhabi, Abu Dhabi Contango

Posted today

Job Description

Tasks

About the Role

We are an emerging AI-native, product-driven, agile start-up under the Abu Dhabi government, and we are seeking a motivated and technically versatile Data Engineer to join our team. You will play a key role in delivering data platforms, pipelines, and ML enablement within a Databricks on Azure environment.

As part of a stream-aligned delivery team, you’ll work closely with Data Scientists, Architects, and Product Managers to build scalable, high-quality data solutions for clients. You'll be empowered by a collaborative environment that values continuous learning, Agile best practices, and technical excellence.

Ideal candidates have strong hands-on experience in Databricks, Python, and ADF, and are comfortable in fast-paced, client-facing consulting engagements.

Skills and Experience requirements
1. Technical

  • Databricks (or similar), e.g. notebooks (Python, SQL), Delta Lake, job scheduling, clusters and workspace management, Unity Catalog, and access-control awareness
  • Cloud data engineering – ideally Azure, including storage (e.g., ADLS, S3), compute, and secrets management
  • Development languages such as Python, SQL, C#, JavaScript, etc., especially for data ingestion, cleaning, and transformation
  • ETL / ELT – including structured logging, error handling, reprocessing strategies, APIs, flat files, databases, message queues, event streaming, event sourcing, etc.
  • Automated testing (ideally TDD), pairing/mobbing, Trunk-Based Development, Continuous Deployment, and Infrastructure-as-Code (Terraform)
  • Git and CI/CD for notebooks, data pipelines, and deployments

2. Integration & Data Handling

  • Experienced in delivering platforms for clients – including file transfer, APIs (REST, etc.), SQL/NoSQL/graph databases, JSON, CSV, XML, Parquet, etc.
  • Data validation and profiling – assessing incoming data quality and coping with schema drift, deduplication, and reconciliation
  • Testing and monitoring pipelines: Unit tests for transformations, data checks, and pipeline observability
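The schema-drift point above can be made concrete: conform each incoming record to an expected schema, filling missing fields with None and setting aside unknown fields for review. Field names here are hypothetical:

```python
def conform(record, expected_fields):
    # Project the record onto the expected schema; missing fields become None.
    conformed = {field: record.get(field) for field in expected_fields}
    # Anything outside the schema is drift, kept for reconciliation.
    drifted = {k: v for k, v in record.items() if k not in expected_fields}
    return conformed, drifted

expected = ["id", "amount", "currency"]
conformed, drifted = conform({"id": 7, "amount": 12.5, "chnl": "web"}, expected)
# conformed == {"id": 7, "amount": 12.5, "currency": None}
# drifted == {"chnl": "web"}
```

Routing the `drifted` remainder to a quarantine table, rather than failing the pipeline, is what lets ingestion keep running while the schema change is reviewed.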

3. Working Style

  • Comfortable leveraging the best of lean, agile and waterfall approaches. Can contribute to planning, estimation, and documentation, but also collaborative daily re-prioritisation
  • Able to explain technical decisions to teammates or clients
  • Documents decisions and keeps stakeholders informed
  • Comfortable seeking support from other teams for Product, Databricks, Data architecture
  • Happy to collaborate with Data Science team on complex subsystems
Requirements

Nice-to-haves

  • MLflow or light MLOps experience (for the data science touchpoints)
  • dbt / Dagster / Airflow or similar transformation tools
  • Understanding of security and compliance (esp. around client data)
  • Past experience in consulting or client-facing roles

Candidate Requirements

  • 5–8 years (minimum 3–4 years hands-on with cloud/data engineering, 1–2 years in Databricks/Azure, and team/project leadership exposure)
  • Bachelor’s degree in Computer Science, Data Engineering, Software Engineering, or Information Systems

Job Type: Full-time

Benefits

Visa, Insurance, Yearly Flight Ticket, Bonus scheme, relocation logistics covered

Interviewing process consists of 2 or 3 technical/behavioral interviews


Data Engineer

Abu Dhabi, Abu Dhabi Contango

Posted today

Job Description

About the Role

We are seeking a motivated and technically versatile Data Engineer to join our team. You will play a key role in delivering data platforms, pipelines, and ML enablement within a Databricks on Azure environment.

As part of a stream-aligned delivery team, you’ll work closely with Data Scientists, Architects, and Product Managers to build scalable, high-quality data solutions for clients. You'll be empowered by a collaborative environment that values continuous learning, Agile best practices, and technical excellence.

Ideal candidates have strong hands-on experience in Databricks, Python, and ADF, and are comfortable in fast-paced, client-facing consulting engagements.

Skills and Experience requirements

1. Technical
  • Databricks (or similar), e.g. notebooks (Python, SQL), Delta Lake, job scheduling, clusters and workspace management, Unity Catalog, and access-control awareness
  • Cloud data engineering – ideally Azure, including storage (e.g., ADLS, S3), compute, and secrets management
  • Development languages such as Python, SQL, C#, JavaScript, etc., especially for data ingestion, cleaning, and transformation
  • ETL / ELT – including structured logging, error handling, reprocessing strategies, APIs, flat files, databases, message queues, event streaming, event sourcing, etc.
  • Automated testing (ideally TDD), pairing/mobbing, Trunk-Based Development, Continuous Deployment, and Infrastructure-as-Code (Terraform)
  • Git and CI/CD for notebooks, data pipelines, and deployments

2. Integration & Data Handling

  • Experienced in delivering platforms for clients – including file transfer, APIs (REST, etc.), SQL/NoSQL/graph databases, JSON, CSV, XML, Parquet, etc.
  • Data validation and profiling – assessing incoming data quality and coping with schema drift, deduplication, and reconciliation
  • Testing and monitoring pipelines: Unit tests for transformations, data checks, and pipeline observability
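The "unit tests for transformations" point above works best when each pipeline step is a pure function that can be asserted against directly. The transformation itself (converting an AED amount to minor units) is invented for illustration:

```python
def normalise_amount(row):
    # Store amounts in minor units (fils) to avoid float drift downstream.
    return {**row, "amount_fils": round(row["amount_aed"] * 100)}

def test_normalise_amount():
    out = normalise_amount({"id": 1, "amount_aed": 19.99})
    assert out["amount_fils"] == 1999
    assert out["id"] == 1  # untouched fields pass through

test_normalise_amount()
```

Because the function takes and returns plain rows, the same test runs locally, in CI, and against sampled production records without any pipeline infrastructure.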

3. Working Style

  • Comfortable leveraging the best of lean, agile and waterfall approaches. Can contribute to planning, estimation, and documentation, but also collaborative daily re-prioritisation
  • Able to explain technical decisions to teammates or clients
  • Documents decisions and keeps stakeholders informed
  • Comfortable seeking support from other teams for Product, Databricks, Data architecture
  • Happy to collaborate with Data Science team on complex subsystems

Nice-to-haves

  • MLflow or light MLOps experience (for the data science touchpoints)
  • dbt / Dagster / Airflow or similar transformation tools
  • Understanding of security and compliance (esp. around client data)
  • Past experience in consulting or client-facing roles

Candidate Requirements

  • 5–8 years (minimum 3–4 years hands-on with cloud/data engineering, 1–2 years in Databricks/Azure, and team/project leadership exposure)
  • Bachelor’s degree in Computer Science, Data Engineering, Software Engineering, or Information Systems

Disclaimer:

This job posting is not open to recruitment agencies. Any candidate profile submitted by a recruitment agency will be considered as being received directly from an applicant. Contango reserves the rights to contact the candidate directly, without incurring any obligations or liabilities for payment of any fees to the recruitment agency.


Data Engineer

Dubai, Dubai Teliolabs Communication Private Limited

Posted today

Job Description

Location : Dubai

Who Can Apply: Candidates who are currently in Dubai
Job Type: Contract
Experience: Minimum 8+ years

Job Summary:

We are looking for an experienced Data Engineer to design, develop, and optimize data pipelines, ETL processes, and data integration solutions. The ideal candidate should have expertise in AWS cloud services, data engineering best practices, open-source tools, and data schema design. The role requires hands-on experience with large-scale data processing, real-time data streaming, and cloud-based data architectures.

Key Responsibilities:

  • Develop and Maintain Data Pipelines to process structured and unstructured data efficiently.
  • Implement ETL/ELT Workflows for batch and real-time data processing.
  • Optimize Data Processing Workflows using distributed computing frameworks.
  • Ensure Data Integrity and Quality through data validation, cleaning, and transformation techniques.
  • Work with AWS Cloud Services, including S3, Redshift, Glue, Lambda, DynamoDB, and Kinesis.
  • Leverage Open-Source Tools like Apache Spark, Airflow, Kafka, and Flink for data processing.
  • Manage and Optimize Database Performance for both SQL and NoSQL environments.
  • Collaborate with Data Scientists and Analysts to enable AI/ML model deployment and data accessibility.
  • Support Data Migration Initiatives from on-premise to cloud-based data platforms.
  • Ensure Compliance and Security Standards in handling sensitive and regulated data.
  • Develop Data Models and Schemas for efficient storage and retrieval.
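The extract-transform-load flow in the responsibilities above can be sketched compactly with the standard library: extract rows from a CSV source, transform them (type cleaning, dropping bad rows), and load them into a list standing in for a warehouse table. The field names and sample data are invented:

```python
import csv
import io

RAW = "user_id,amount\n1,10.5\n2,not_a_number\n3,4.0\n"

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    clean = []
    for row in rows:
        try:
            clean.append({"user_id": int(row["user_id"]),
                          "amount": float(row["amount"])})
        except ValueError:
            # Real pipelines would route bad records to a dead-letter store.
            continue
    return clean

warehouse = []
warehouse.extend(transform(extract(RAW)))
# warehouse holds the two valid rows (user_id 1 and 3)
```

The same skeleton scales up by swapping the stages: S3 objects for the string source, Spark or Glue for the transform, and Redshift for the target list.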

Required Skills & Qualifications:

  • 8+ years of experience in data engineering, data architecture, and cloud computing.
  • Strong knowledge of AWS Services such as Glue, Redshift, Athena, Lambda, and S3.
  • Expertise in ETL Tools, including Talend, Apache NiFi, Informatica, dbt, and AWS Glue.
  • Proficiency in Open-Source Tools such as Apache Spark, Hadoop, Airflow, Kafka, and Flink.
  • Strong Programming Skills in Python, SQL, and Scala.
  • Experience in Data Schema Design, normalization, and performance optimization.
  • Knowledge of Real-time Data Streaming using Kafka, Kinesis, or Apache Flink.
  • Experience in Data Warehouse and Data Lake Solutions.
  • Hands-on experience with DevOps and CI/CD Pipelines for data engineering workflows.
  • Understanding of AI and Machine Learning Data Pipelines.
  • Strong analytical and problem-solving skills.

Preferred Qualifications:

  • AWS Certified Data Analytics – Specialty or AWS Solutions Architect certification.
  • Experience with Kubernetes, Docker, and serverless data processing.
  • Exposure to MLOps and data engineering practices for AI/ML solutions.
  • Experience with distributed computing and big data frameworks.

Data Engineer

Dubai, Dubai Everythinginclick

Posted today

Job Description

The Data Engineer will be responsible for developing semantic models on top of the Data Lake/Data Warehouse to fulfill the self-service BI foundation requirements. This includes data extraction from various data sources and integration into the central data lake/data warehouse using enterprise platforms like Informatica iPaaS.

Key Responsibilities of Data Engineer
  1. Designing data warehouse data models based on business requirements.
  2. Designing, developing, and testing both batch and real-time Extract, Transform and Load (ETL) processes required for data integration.
  3. Ingesting both structured and unstructured data into the SMBU data lake/data warehouse system.
  4. Designing and developing semantic models/self-service cubes.
  5. Performing BI administration and access management to ensure access and reports are properly governed.
  6. Performing unit testing and data validation to ensure business UAT is successful.
  7. Performing ad-hoc data analysis and presenting results in a clear manner.
  8. Assessing data quality of the source systems and proposing enhancements to achieve a satisfactory level of data accuracy.
  9. Optimizing ETL processes to ensure execution time meets requirements.
  10. Maintaining and architecting ETL pipelines to ensure data is loaded on time on a regular basis.
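The "loaded on time on a regular basis" responsibility above is typically handled with a watermark: given the last successfully loaded date, an incremental run computes which daily partitions are still pending. A minimal sketch, with invented dates:

```python
from datetime import date, timedelta

def partitions_to_load(watermark, today):
    # Every day strictly after the watermark, up to and including today.
    days = (today - watermark).days
    return [watermark + timedelta(days=i) for i in range(1, days + 1)]

pending = partitions_to_load(date(2024, 3, 1), date(2024, 3, 4))
# pending == [date(2024, 3, 2), date(2024, 3, 3), date(2024, 3, 4)]
```

In an ADF or Synapse pipeline the watermark would live in a control table and be advanced only after a partition loads successfully, which makes reruns after a failure automatic.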
Qualification Required for Data Engineer
  1. 5 to 8 years of overall experience.
  2. Proven experience in the development of dimensional models in Azure Synapse with strong SQL knowledge.
  3. Minimum of 3 years working as a Data Engineer in the Azure ecosystem, specifically using Synapse, ADF & Databricks.
  4. Preferably 3 years of experience with data warehousing, ETL development, SQL Queries, Synapse, ADF, PySpark, and Informatica iPaaS for data ingestion & data modeling.

Python Engineer /Big Data

Dubai, Dubai Forte Digital Poland

Posted today

Job Description

Seeking a Python Engineer in Dubai to build scalable backend systems using Big Data tech, optimize systems, and collaborate across teams. Proficiency in cloud platforms and data engineering is key.

Description

Are you passionate about building scalable backend systems that handle vast amounts of data? Do you have a deep understanding of Python engineering and experience working with Big Data technologies? If so, we want to hear from you!

What You'll Do:
Design, build, and maintain scalable and robust backend services using Python.
Work on data pipelines, transforming large datasets into meaningful insights.
Collaborate with data scientists, engineers, and product teams to optimize system performance.
Leverage your knowledge in Big Data technologies (e.g., Snowflake, Hadoop, Spark, Kafka) to create data-driven solutions.
Ensure smooth data flow and storage across various systems, ensuring high availability and fault tolerance.
Continuously improve codebase quality through testing, peer review, and adherence to best practices.
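The fault-tolerance point above often comes down to retrying flaky downstream calls with exponential backoff. A minimal sketch; the failing function is simulated, and the delays are shortened for illustration:

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    # Retry fn with exponential backoff; re-raise after the last attempt.
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}

def flaky():
    # Simulate a downstream service that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = with_retries(flaky)
# result == "ok" after two simulated failures
```

Production variants usually add jitter to the delay and retry only on transient error types, so that a hard failure (e.g. bad credentials) surfaces immediately.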

What We're Looking For:
Proven experience as a Backend Engineer with a focus on Python.
Strong understanding of Big Data architectures and tools (e.g., Hadoop, Spark, Flink, etc.).
Experience with data engineering concepts, including ETL pipelines, data warehousing, and real-time streaming.
Proficiency in cloud platforms (AWS, GCP, Azure) and containerization technologies (Docker, Kubernetes).
Solid experience with relational and non-relational databases (e.g., PostgreSQL, MongoDB, Cassandra).
Problem-solving mindset with a strong ability to debug and optimize code.
Excellent communication skills and a team player.

Bonus Points:
Familiarity with machine learning models and data science workflows.
Experience with RESTful API design and microservices architecture.
Knowledge of DevOps tools and CI/CD pipelines.


Python Engineer (Big Data)

Dubai, Dubai GW LTD

Posted today

Job Description

Python Engineer (Big Data) in Dubai: Design scalable backend systems, handle data pipelines, and use Big Data tech. Full-Time, experience required.

Description

Python Engineer (Big Data) in Dubai: Design scalable backend systems using Python, collaborate on data pipelines, and leverage Big Data tech like Hadoop and Spark. Experience required.

Location: Dubai, UAE
Type: Full-Time

Are you passionate about building scalable backend systems that handle vast amounts of data? Do you have a deep understanding of Python engineering and experience working with Big Data technologies? If so, we want to hear from you!

What You'll Do:

  1. Design, build, and maintain scalable and robust backend services using Python.
  2. Work on data pipelines, transforming large datasets into meaningful insights.
  3. Collaborate with data scientists, engineers, and product teams to optimize system performance.
  4. Leverage your knowledge in Big Data technologies (e.g., Snowflake, Hadoop, Spark, Kafka) to create data-driven solutions.
  5. Ensure smooth data flow and storage across various systems, ensuring high availability and fault tolerance.
  6. Continuously improve codebase quality through testing, peer review, and adherence to best practices.
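The pipeline responsibilities above reward keeping each stage a plain function, so pipelines become simple compositions that are easy to test and reorder. A sketch with invented stage names and data:

```python
from functools import reduce

def pipeline(*stages):
    # Compose stages left to right: the output of one feeds the next.
    return lambda data: reduce(lambda acc, stage: stage(acc), stages, data)

def clean(rows):
    # Drop rows with a missing price.
    return [r for r in rows if r.get("price") is not None]

def enrich(rows):
    # Derive minor units; integer math keeps the example exact.
    return [{**r, "price_cents": r["price"] * 100} for r in rows]

run = pipeline(clean, enrich)
out = run([{"price": 100}, {"price": None}])
# out == [{"price": 100, "price_cents": 10000}]
```

The same composition idea underlies Spark's transformation chains and Airflow's task graphs; here it is just function application, which makes each stage unit-testable in isolation.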

What We're Looking For:

  1. Proven experience as a Backend Engineer with a focus on Python.
  2. Strong understanding of Big Data architectures and tools (e.g., Hadoop, Spark, Flink, etc.).
  3. Experience with data engineering concepts, including ETL pipelines, data warehousing, and real-time streaming.
  4. Proficiency in cloud platforms (AWS, GCP, Azure) and containerization technologies (Docker, Kubernetes).
  5. Solid experience with relational and non-relational databases (e.g., PostgreSQL, MongoDB, Cassandra).
  6. Problem-solving mindset with a strong ability to debug and optimize code.
  7. Excellent communication skills and a team player.

Bonus Points:

  1. Familiarity with machine learning models and data science workflows.
  2. Experience with RESTful API design and microservices architecture.
  3. Knowledge of DevOps tools and CI/CD pipelines.

Interested?
