87 Data Architect jobs in Dubai
Big Data Architect
Posted today
Job Viewed
Job Description
Job Title: Senior Data Engineer - Big Data/Hadoop Ecosystem
Role Overview
As a seasoned Senior Data Engineer, you will lead cutting-edge data initiatives in the banking sector, leveraging expertise in the Hadoop ecosystem to architect, build, and optimize large-scale data systems. Your mentorship skills will be invaluable as you guide a team of data engineers.
This role is perfect for individuals who thrive in collaborative environments and are passionate about delivering robust data solutions that drive business growth.
Main Responsibilities
- Design, develop, and optimize scalable data processing systems using the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig, HBase, Flume, Sqoop) and related Big Data technologies such as Spark.
- Lead and mentor a team of data engineers to ensure timely and high-quality project delivery, fostering a culture of innovation and excellence.
- Engineer, tune, and maintain complex data pipelines in Java, MapReduce, Hive, and Spark, including implementing stream-processing with Spark-Streaming.
- Design and build efficient dimensional data models and scalable architectures to empower analytics and business intelligence, driving informed decision-making across the organization.
- Oversee data integrity analysis, deployment, validation, and auditing of data models for accuracy and operational excellence, ensuring the highest standards of data quality.
- Leverage advanced SQL skills for performance tuning and optimization of data jobs, maximizing system efficiency and throughput.
- Collaborate with business intelligence teams to deliver industry-leading dashboards and data products that meet the evolving needs of stakeholders.
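The dimensional-modeling responsibility above can be illustrated with a minimal star schema: one fact table keyed to its dimensions, queried for analytics. The table and column names below are illustrative assumptions, not part of the posting:

```python
import sqlite3

# Minimal star schema: one fact table joined to two dimension tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, segment TEXT);
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, full_date TEXT, month TEXT);
CREATE TABLE fact_transactions (
    txn_id INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key INTEGER REFERENCES dim_date(date_key),
    amount REAL
);
""")
conn.executemany("INSERT INTO dim_customer VALUES (?, ?, ?)",
                 [(1, "Acme", "corporate"), (2, "Zed", "retail")])
conn.executemany("INSERT INTO dim_date VALUES (?, ?, ?)",
                 [(20240101, "2024-01-01", "2024-01")])
conn.executemany("INSERT INTO fact_transactions VALUES (?, ?, ?, ?)",
                 [(1, 1, 20240101, 150.0), (2, 2, 20240101, 75.0)])

# Typical BI query: total amount per customer segment.
rows = conn.execute("""
    SELECT c.segment, SUM(f.amount)
    FROM fact_transactions f JOIN dim_customer c USING (customer_key)
    GROUP BY c.segment ORDER BY c.segment
""").fetchall()
print(rows)  # [('corporate', 150.0), ('retail', 75.0)]
```

The same shape scales from SQLite to Hive or a warehouse: facts stay narrow and additive, while descriptive attributes live on the dimensions.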
Requirements
- 10+ years of hands-on experience as a Big Data Engineer, with deep technical expertise in the Hadoop ecosystem (Cloudera preferred), Apache Spark, and distributed data frameworks.
- Proven experience leading backend/distributed data systems teams while remaining technically hands-on, driving results-oriented projects that exceed expectations.
- Advanced proficiency in Java for MapReduce development, as well as strong skills in Python and/or Scala, ensuring adaptability and flexibility in a rapidly changing environment.
- Expertise in Big Data querying tools including Hive, Pig, and Impala, providing a competitive edge in data analysis and insights.
- Strong experience with both relational (Postgres) and NoSQL databases (Cassandra, HBase), enabling seamless data integration and management.
- Solid understanding of dimensional data modeling and data warehousing principles, driving data-driven decision-making and strategic business growth.
- Proficient in Linux/Unix systems and shell scripting, ensuring effective system administration and maintenance.
- Experience with Azure cloud services (Azure Data Lake, Databricks, HDInsight), facilitating cloud-based data management and scalability.
- Knowledge of stream-processing frameworks such as Spark-Streaming or Storm, enabling real-time data processing and analytics.
- Background in Financial Services or Banking industry, with exposure to data science and machine learning tools, enhancing data-driven business strategies.
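The stream-processing skill mentioned above (Spark Streaming, Storm) can be sketched at a conceptual level with a tumbling-window aggregation in plain Python; this illustrates the windowing idea only and is not Spark's actual API:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Group (timestamp, key) events into fixed, non-overlapping
    windows and count occurrences per key within each window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        # Each event falls into exactly one window, aligned to window_seconds.
        window_start = (ts // window_seconds) * window_seconds
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

events = [(0, "login"), (3, "login"), (7, "payment"), (12, "login")]
print(tumbling_window_counts(events, 5))
# {0: {'login': 2}, 5: {'payment': 1}, 10: {'login': 1}}
```

In Spark Structured Streaming the equivalent is a windowed groupBy over an unbounded source; the partitioning logic above is what the framework manages continuously and at scale.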
Big Data Architect
Job Description
A total of 15 years of experience in systems architecture design, including 10 years in the architecture design of Big Data Platforms, with the following skills:
- Proven experience in designing Big Data Platforms for both Infrastructure and Software Implementation.
- Good understanding of the architecture design stage.
Responsibilities include:
- Preparing detailed deployment architecture documents of the Enterprise Big Data Platform (redesign of existing architecture).
- Supporting CART review of the solution architecture of the Enterprise Big Data Platform.
- Coordinating technical specifications related to the Enterprise Big Data Platform infrastructure and software stack.
Chief Big Data Architect
Job Description
With extensive experience in crafting large-scale architecture designs and 10 years in Big Data Platforms, our ideal candidate will possess key skills including:
- Proven expertise in designing scalable Big Data Platforms for Infrastructure and Software Implementation.
- A deep understanding of the design process.
The role encompasses:
- Creating detailed deployment architecture documents for the Enterprise Big Data Platform.
- Supporting solution architecture reviews for the Enterprise Big Data Platform.
- Coordinating technical specifications for the Big Data Platform infrastructure software stack.
Senior Big Data Architect
Job Description
We are seeking a skilled professional to lead the development and maintenance of scalable data pipelines that ensure high data quality and availability across our organization. This role requires expertise in big data ecosystems, cloud-native tools, and advanced data processing techniques.
The ideal candidate will have hands-on experience with data ingestion, transformation, and optimization on distributed computing platforms, along with a proven track record of implementing data engineering best practices. You will work closely with other data engineers to build solutions that drive impactful business insights.
Key Responsibilities:
- Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
- Data Ingestion: Implement and manage data ingestion processes from various sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
- Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
- Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
- Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.
- Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
- Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives.
- Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.
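The data quality and validation responsibility above can be sketched framework-free in plain Python. The posting implies these checks run in PySpark on CDP; the rules, field names, and records below are illustrative assumptions:

```python
def run_quality_checks(rows, required_fields, key_field):
    """Run basic data quality checks on a batch of records.

    Returns a dict mapping check name -> list of offending keys/records.
    """
    issues = {"missing_required": [], "duplicate_keys": []}
    seen = set()
    for row in rows:
        # Completeness: every required field must be present and non-null.
        missing = [f for f in required_fields if row.get(f) in (None, "")]
        if missing:
            issues["missing_required"].append((row.get(key_field), missing))
        # Uniqueness: the business key must not repeat within the batch.
        key = row.get(key_field)
        if key in seen:
            issues["duplicate_keys"].append(key)
        seen.add(key)
    return issues

batch = [
    {"txn_id": 1, "amount": 10.0, "currency": "AED"},
    {"txn_id": 2, "amount": None, "currency": "AED"},  # fails completeness
    {"txn_id": 1, "amount": 5.0, "currency": "AED"},   # fails uniqueness
]
report = run_quality_checks(batch, ["amount", "currency"], "txn_id")
print(report)
```

In a production pipeline these checks would typically run as a validation stage before load, with failures routed to monitoring rather than returned in-process.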
Qualifications and Skills:
- Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
- 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.
- Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
- Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
- Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Strong scripting skills in Linux.
Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication abilities.
- Ability to work independently and collaboratively in a team environment.
- Attention to detail and commitment to data quality.
Data Architect
Job Description
Position Overview
The Data Architect is responsible for designing, implementing, and maintaining an organization's data architecture and strategy, ensuring that data is collected, stored, and processed efficiently and securely to support business intelligence, data analytics, and machine learning operations (MLOps) practices.
Key Responsibilities
- Designing Data Architecture: Plan and implement a robust, scalable data architecture that integrates data from various sources and supports diverse analytical needs, while optimising costs and meeting business requirements.
- Implementing Data Engineering Pipelines: Design and develop data pipelines for data extraction, transformation, and loading (ETL) processes, ensuring data quality and consistency.
- Enabling Data Intelligence and Analytics: Build and maintain data warehouses, data marts, and data lakes to support business intelligence and data analytics initiatives.
- Supporting MLOps Practices: Collaborate with data scientists and machine learning engineers to design and implement data infrastructure and processes that support machine learning model development, deployment, and maintenance.
- Ensuring Data Security and Compliance: Implement security measures, policies, and procedures to safeguard data privacy and comply with relevant regulations.
- Data Governance and Management: Establish and enforce data governance policies and standards to ensure data quality, integrity, and accessibility.
- Collaborating with Cross-Functional Teams: Work closely with data engineers, data scientists, business analysts, and other stakeholders to understand data requirements and translate them into technical solutions.
- Staying Abreast of Technological Advancements: Keep up-to-date with emerging technologies and trends in data architecture, data engineering, and MLOps to identify opportunities for improvement and innovation.
- Optimising Data Performance: Monitor and analyse data processing performance, identify bottlenecks, and implement optimisations to enhance efficiency and scalability.
- Documentation and Knowledge Sharing: Create and maintain comprehensive documentation of data architecture, models, and processing workflows.
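The extract-transform-load pattern in the responsibilities above can be sketched as three composable steps. These are pure-Python stand-ins with made-up field names; a real implementation would target the warehouse and tooling the posting names:

```python
import csv
import io

def extract(csv_text):
    # Extract: parse raw source records into dicts.
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(records):
    # Transform: cast types, normalise fields, drop invalid rows.
    out = []
    for r in records:
        try:
            out.append({"city": r["city"].strip().title(),
                        "sales": float(r["sales"])})
        except (KeyError, ValueError):
            continue  # a real pipeline would route rejects to a quarantine table
    return out

def load(records, warehouse):
    # Load: aggregate into the target store (a dict standing in for a table).
    for r in records:
        warehouse[r["city"]] = warehouse.get(r["city"], 0.0) + r["sales"]
    return warehouse

raw = "city,sales\n dubai ,100\nAbu Dhabi,250\nbad-row,notanumber\n"
warehouse = load(transform(extract(raw)), {})
print(warehouse)  # {'Dubai': 100.0, 'Abu Dhabi': 250.0}
```

Keeping the three stages as separate functions mirrors how pipeline steps are isolated for testing and orchestration in real ETL frameworks.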
Technical Requirements
- Extensive experience in data architecture design and implementation.
- Hands-on experience with Amazon Redshift for data warehousing, query optimization, and integration with AWS analytics services.
- Strong knowledge of data engineering principles and practices.
- Expertise in data warehousing, data modelling, and data integration.
- Experience in MLOps and machine learning pipelines.
- Proficiency in SQL and data manipulation languages.
- Experience with big data platforms and tools (e.g., Power BI, dbt, Databricks, MySQL, Postgres, ClickHouse, Kafka, Redis) and cloud-based infrastructure on AWS.
Education & Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field, or equivalent experience.
- Preferred certifications (optional):
- AWS Machine Learning Ops Engineer
Seniority level
- Mid-Senior level
Employment type
- Full-time
Job function
- Engineering and Information Technology
- Industries: Software Development, Data Infrastructure and Analytics, and IT System Custom Software Development
Data Architect
Job Description
Responsible for designing, implementing, and managing the organization's data architecture to ensure data availability, accuracy, security, and scalability.
Key Responsibilities
- Design and optimize data models, databases, and pipelines.
- Define and enforce data governance, standards, and best practices.
- Collaborate with business and technical teams to translate requirements into scalable data solutions.
- Ensure data quality, integrity, and security across systems.
- Evaluate and implement data tools, platforms, and cloud solutions (AWS, Azure, GCP).
Requirements
- Strong knowledge of data modeling, ETL/ELT, and data warehousing.
- Proficiency in SQL, NoSQL, and big data technologies.
- Experience with cloud platforms and modern data architectures (e.g., lakehouse, streaming).
- Understanding of data governance, compliance, and security.
Seniority level
- Not Applicable
Employment type
- Contract
Job function
- Engineering and Information Technology
- Industries: IT Services and IT Consulting
Data Architect
Job Description
Senior Data Scientist Position
This role is ideal for a highly skilled data scientist with expertise in designing and implementing production-ready algorithms and models to manage risk and drive business growth.
Key Responsibilities:
- Develop predictive and forecasting models for risk management and business growth.
- Own the full model cycle from experimentation to deployment, including model calibration and post-modeling adjustments.
- Monitor regional and international asset risk.
- Calculate RV provisions, identify hand-backs, and quantify feature impact.
- Conduct statistical and quantitative analyses to identify potential risks.
- Design and develop risk measures, methodologies, and models.
- Define pull-forward or backward strategy to optimize trade cycles.
- Quantify feature importance in Time-to-fail and Mileage-to-fail.
Requirements:
- 5+ years of experience in financial services, specializing in data analytics and business intelligence.
- Strong knowledge of statistics, Python, and SQL for data modeling, automation, and reporting.
- Proven track record of delivering analytics use cases such as credit risk modeling, fraud detection, customer segmentation, and compliance reporting.
- 5+ years of hands-on experience in Data Science (Machine Learning, NLP, Neural Nets, Time-Series Forecasting, etc.), including supervised and unsupervised learning methods for inference on data patterns.
You will be working under the guidance of an experienced Head of Data Science & Asset Risk.
Data Architect
Job Description
Data Architect
Job Description:
We are seeking a skilled Data Architect to join our team. As a Data Architect, you will play a critical role in designing and maintaining our data architectures.
This includes working with databases and processing systems to ensure efficient data extraction and transformation.
You will be responsible for designing and building data pipelines, acquiring and integrating new data sources, and optimizing data quality and reliability.
Required Skills and Qualifications:
- Degree in data science or a related field
- Extensive experience with SQL, Python, and R programming languages
- Experience with Airflow, Spark, Hadoop, and PostgreSQL
We offer flexible and collaborative solutions to any organization that requires IT experts. Our agile approach focuses on essential skills and abilities. We encourage applications from a diverse array of backgrounds.
Senior Data Architect
Job Description
Education
- Master's or Bachelor's degree in computer science, information systems management, or a related field.
- Solution and architecture certifications such as TOGAF.
- 5+ years of experience in information technology, with 2+ years in data architecture or technology solution definition and implementation.
- Extensive experience in the banking and financial services domain.
- Expertise in data architecture, data strategy, and roadmaps for large, complex organizations and systems, with a track record of implementing large-scale, end-to-end Data Management & Analytics solutions.
- Experience transforming traditional data warehousing approaches into Big Data-based approaches, with a proven track record of managing risk and data security.
- Expertise in DW dimensional modeling techniques: Star and Snowflake schemas, slowly changing dimensions, role-playing dimensions, dimensional hierarchies, and data classification.
- Experience in cloud-native principles, designs, and deployments.
- Extensive experience working with and enhancing Continuous Integration (CI) and Continuous Delivery (CD) environments.
- Expertise in Data Quality, Data Profiling, Data Governance, Data Security, Metadata Management, and Data Archival
- Define workload migration strategies using appropriate tools
- Drive delivery in a matrixed environment working with various internal IT partners
- Demonstrated ability to work in a fast paced and changing environment with short deadlines, interruptions, and multiple tasks/projects occurring simultaneously
- Must be able to work independently, with skills in planning, strategy, estimation, and scheduling
- Strong problem-solving, influencing, communication, and presentation skills; a self-starter
- Experience with data processing frameworks and platforms (Informatica, Hadoop, Presto, Tez, Hive, Spark etc.)
- Hands-on experience with related/complementary open-source software platforms and languages (e.g. Java, Linux, Python, Git, Jenkins)
- Exposure to BI tools and reporting software (e.g. Microsoft Power BI and Tableau)
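The slowly-changing-dimension expertise listed above can be illustrated with a minimal Type 2 update, where a changed attribute expires the current dimension row and opens a new versioned one. The schema, keys, and dates below are illustrative assumptions:

```python
def scd2_upsert(dim_rows, business_key, incoming, effective_date):
    """Apply a Type 2 slowly-changing-dimension update in place.

    dim_rows: list of dicts with 'key', attribute columns, 'valid_from',
    and 'valid_to' (None marks the current version of a key).
    """
    current = next((r for r in dim_rows
                    if r["key"] == business_key and r["valid_to"] is None), None)
    attrs = {k: v for k, v in (current or {}).items()
             if k not in ("key", "valid_from", "valid_to")}
    if current and attrs == incoming:
        return dim_rows  # no attribute change: keep the current version
    if current:
        current["valid_to"] = effective_date  # expire the old version
    dim_rows.append({"key": business_key, **incoming,
                     "valid_from": effective_date, "valid_to": None})
    return dim_rows

dim = [{"key": "C1", "segment": "retail",
        "valid_from": "2023-01-01", "valid_to": None}]
scd2_upsert(dim, "C1", {"segment": "corporate"}, "2024-06-01")
print(dim)
```

After the update the dimension holds two rows for C1: the expired "retail" version and the open "corporate" one, so fact rows can join to whichever version was valid at transaction time.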
Sr. Data Architect
Job Description
Role Summary
As a Senior Analytical Engineer, you will be a core member of the data architect team, responsible for transforming raw data into high-quality, actionable insights. Your mission is to build, own, and maintain the data products that power our analytics, reporting, and business decisions. You will work across the entire data lifecycle, from data ingestion to delivering final insights, ensuring data quality and governance are central to everything you build.
This role requires a unique blend of technical expertise in data modeling, data pipelines, data quality, a strong business acumen to understand complex problems, and exceptional communication skills to collaborate with diverse teams. You will act as the key link between our data science, engineering, product, and the business, driving value by making data trustworthy and accessible.
What's On Your Plate?
As an Analytical Engineer and Data Architect, you will be responsible for:
1. Data Product Development & Ownership
Build and Maintain: Design, build, and maintain enterprise data warehouses and analytical data marts.
Translate Needs: Work closely with data scientists, product managers, and business teams to understand their problems and translate complex business requirements into robust data models and data products.
End-to-End Responsibility: Take full ownership of data products from concept to production, ensuring they are scalable, reliable, and well-documented.
2. Data Quality & Governance
Enhance Data Quality: Implement and enforce data quality and governance standards throughout the data pipeline, moving checks and enrichment upstream.
Root Cause Analysis: Proactively identify data quality and governance shortfalls, perform root cause analysis, and implement solutions with monitoring mechanisms.
3. Cross-Functional Collaboration
Collaborate: Partner with data engineers to ensure smooth data ingestion from source systems and with backend engineers to define and implement data contracts.
Consult: Serve as a subject matter expert for analytical and business teams, guiding them on best practices for data consumption and model design.
4. Data Pipeline & Modeling Expertise
Pipeline Management: Design and optimize data pipelines to support data transformation, data structures, metadata, and dependency management.
Modern Modeling: Apply a variety of data modeling techniques, including modern methods, to build efficient and flexible data solutions.
Qualifications
What Did We Order?
Experience: 5+ years of experience in data management, with a strong focus on analytical engineering.
Technical Skills: Advanced SQL knowledge and experience with relational and non-relational databases. Proficient in building and optimizing data pipelines and analytical data models.
Data Modeling: Solid experience with various data modeling techniques, including modern methodologies.
Center of Excellence: Build best practices and processes supporting data transformation, data structures, metadata, dependency, and workload management.
Problem-Solving: A strong problem-solver with a "figure it out" growth mindset and a "keep it simple" approach.
Collaboration: Proven ability to work effectively with cross-functional teams in a dynamic environment. Excellent collaborator and communicator who can translate business needs into technical solutions.
Ownership: A strong sense of ownership and accountability for data products and their impact.
Education: Bachelor's degree in engineering, computer science, technology, or a similar field.