23 AI Lab jobs in the United Arab Emirates
Data Scientist - UAE National, AWS Generative AI Innovation Center
Posted 8 days ago
Job Description
Amazon launched the Generative AI Innovation Center (GenAIIC) in June 2023 to help AWS customers accelerate the use of generative AI to solve business and operational problems and to promote innovation in their organizations. This is a team of strategists, data scientists, engineers, and solution architects working step-by-step with customers to build bespoke solutions that harness the power of generative AI.
We're looking for Data Scientists to use generative AI and other techniques to design, evangelize, and implement state-of-the-art solutions for never-before-solved problems.
You will work directly with customers and innovate in a fast-paced organization that contributes to game-changing projects and technologies. You will design and run experiments, research new algorithms, and find new ways of optimizing risk, profitability, and customer experience.
As an early-in-career joiner, you will initially join our A2C (Associate to Consultant) program for intensive training on AWS technology and delivery approach.
Emirati nationality is required.
Key job responsibilities
As a Data Scientist, you will
- Collaborate with AI/ML scientists, engineers, and architects to research, design, develop, and evaluate cutting-edge generative AI algorithms that address real-world challenges
- Interact directly with customers to understand their business problems, help them implement generative AI solutions, deliver briefings and deep-dive sessions, and guide them on adoption patterns and paths to production
- Create and deliver best-practice recommendations, tutorials, blog posts, sample code, and presentations adapted to technical, business, and executive stakeholders
- Provide customer and market feedback to Product and Engineering teams to help define product direction
About the team
The team helps customers imagine and scope the use cases that will create the greatest value for their businesses, select and train or fine-tune the right models, define paths to navigate technical or business challenges, develop proofs of concept, and make plans for launching solutions at scale. The Generative AI Innovation Center team provides guidance on best practices for applying generative AI responsibly and cost efficiently.
Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.
Why AWS?
Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating - that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Inclusive Team Culture
Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.
Mentorship & Career Growth
We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
Work/Life Balance
We value work-life harmony. Success at work should never come at the expense of life at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.
Basic Qualifications
- PhD or Master's degree or equivalent experience
- Experience building a range of AI/ML models
- Experience in any of the following areas: algorithms and data structures, parsing, numerical optimization, data mining, parallel and distributed computing, high-performance computing, and/or deep learning and machine learning methods
Preferred Qualifications
- Experience using Python and hands-on experience building models with deep learning frameworks such as TensorFlow, Keras, PyTorch, or MXNet
- Prior experience in training and fine-tuning of Large Language Models (LLMs)
- Knowledge of AWS platform and tools
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Data & AI Lead - Innovation & Execution
Posted today
Job Description
- Successful delivery of large-scale AI, Generative AI, and Agent AI platforms.
- Deliver groundbreaking AI solutions with data teams and tech vendors.
About Our Client
Our client is a global leader in technology, at the forefront of innovation, consistently pushing the boundaries of what's possible with Artificial Intelligence.
Job Description
- Technical Strategy & Roadmap: Lead the technical definition and evolution of the AI, Generative AI, and Agent AI platform strategy, identifying key technical initiatives and architectural patterns required for scalable, robust, and innovative solutions.
- Large-Scale Platform Delivery Oversight: Provide deep technical oversight and guidance for the delivery of complex AI platforms, ensuring adherence to technical standards, performance metrics, and security requirements at an enterprise scale.
- Vendor & Partner Collaboration: Act as a primary technical liaison with leading AI technology vendors, evaluating new capabilities, driving strategic partnerships, and integrating third-party solutions into the internal AI ecosystem.
- Internal Team Enablement: Collaborate closely with in-house data science, machine learning engineering, and data engineering teams to understand technical challenges, unblock dependencies, and foster a culture of technical excellence and shared accountability for AI platform delivery.
- Innovation & Research Integration: Monitor cutting-edge AI research and industry trends (e.g., advanced LLM architectures, multi-modal AI, autonomous agents), identifying opportunities to integrate new capabilities into strategic AI initiatives and platforms.
The Successful Applicant
- Master's or PhD in Computer Science, Artificial Intelligence, Machine Learning, Data Science, or a related highly technical quantitative field.
- 10+ years of progressive experience in technical roles within AI/ML engineering, data engineering, or a related field, with at least 4 years in a lead or strategic oversight position focused specifically on AI platforms.
- Deep technical expertise and hands-on experience in building and deploying large-scale AI systems, including familiarity with various AI paradigms (e.g., Supervised Learning, Reinforcement Learning, Generative AI, Agentic AI).
- Proven experience overseeing the delivery of Generative AI and Agent AI platforms, understanding their unique architectural requirements, data pipelines, and deployment challenges.
What's on Offer
This is an unparalleled opportunity for the successful candidate to shape the future of AI at a global tech leader.
AI Software Development Consultant
Posted today
Job Description
Overview
Design, develop, and deploy AI/ML models to solve business problems, and provide consulting expertise on AI strategies, tools, and implementation, ensuring solutions are robust, scalable, and aligned with business goals.
Responsibilities
- Design, develop, and deploy AI/ML models to solve business problems.
- Collaborate with cross-functional teams to identify opportunities for AI integration.
- Translate client requirements into scalable AI-powered solutions.
- Provide consulting expertise on AI strategies, tools, and implementation.
- Stay up to date with the latest AI/ML frameworks, tools, and industry trends.
- Ensure solutions are robust, scalable, and aligned with business goals.
Qualifications
- Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or a related field.
- Strong programming skills in Python, R, or similar languages.
- Experience with TensorFlow, PyTorch, Keras, or other AI frameworks.
- Knowledge of cloud platforms (AWS, Azure, GCP) for AI/ML solutions.
- Excellent problem-solving and consulting skills.
- Strong communication skills and the ability to present AI solutions to non-technical stakeholders.
What's on Offer
- Opportunity to work on cutting-edge AI projects.
- Collaborative and innovative work environment.
- Career growth and continuous learning opportunities.
- Be part of an international, forward-thinking team.
AI Research Engineer (Model Evaluation)
Posted today
Job Description
Join Tether and Shape the Future of Digital Finance
At Tether, we’re not just building products, we’re pioneering a global financial revolution. Our solutions enable seamless integration of reserve-backed tokens across blockchains, empowering businesses worldwide. Transparency and trust are at the core of everything we do.
Innovate with Tether
Tether Finance: Our product suite features the trusted stablecoin USDT and digital asset tokenization services.
Tether Power: We promote sustainable Bitcoin mining using eco-friendly practices.
Tether Data: We develop AI and P2P solutions like KEET for secure data sharing.
Tether Education: We democratize digital learning to empower individuals globally.
Tether Evolution: We push technological boundaries to merge innovation with human potential.
Why Join Us?
Our remote, global team is passionate about fintech innovation. If you have excellent English skills and want to contribute to a leading platform, Tether is your place.
Are you ready to be part of the future?
About the job:
As part of our AI model team, you will develop evaluation frameworks for AI models across their lifecycle, focusing on metrics like accuracy, latency, and robustness. You will work on models from resource-efficient to multi-modal architectures, collaborating with cross-functional teams to implement evaluation pipelines and dashboards, setting industry standards for AI model quality.
Responsibilities:
- Develop and deploy evaluation frameworks assessing models during pre-training, post-training, and inference, tracking KPIs such as accuracy, latency, and memory usage.
- Curate datasets and design benchmarks to measure model robustness and improvements.
- Collaborate with product, engineering, and operations teams to align evaluation metrics with business goals, presenting findings via dashboards and reports.
- Analyze evaluation data to identify bottlenecks, proposing optimizations for performance and resource efficiency.
- Conduct experiments to refine evaluation methodologies, staying updated with emerging techniques to enhance model reliability.
Minimum requirements:
- A degree in Computer Science or related field; PhD in NLP, Machine Learning, or similar is preferred, with a strong R&D record.
- Experience designing and evaluating AI models at various stages, proficient in evaluation frameworks assessing accuracy, convergence, and robustness.
- Strong programming skills and experience building scalable evaluation pipelines, familiar with performance metrics like latency, throughput, and memory footprint.
- Ability to conduct iterative experiments, staying current with new techniques to improve benchmarking practices.
- Experience working with cross-functional teams, translating technical insights into actionable recommendations.
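The responsibilities above amount to instrumenting models and reporting KPIs such as accuracy and latency. As a rough illustration (the model and dataset below are hypothetical stand-ins, not anything from Tether's stack), a minimal evaluation harness might look like:

```python
import time

def evaluate(model, dataset):
    """Minimal evaluation harness: reports accuracy and mean latency.

    `model` is any callable mapping an input to a predicted label;
    `dataset` is a list of (input, expected_label) pairs.
    """
    correct, latencies = 0, []
    for x, expected in dataset:
        start = time.perf_counter()
        prediction = model(x)
        latencies.append(time.perf_counter() - start)
        correct += prediction == expected
    return {
        "accuracy": correct / len(dataset),
        "mean_latency_s": sum(latencies) / len(latencies),
    }

def toy_model(x):
    # Hypothetical stand-in "model": labels an integer even or odd.
    return "even" if x % 2 == 0 else "odd"

report = evaluate(toy_model, [(1, "odd"), (2, "even"), (3, "even")])
print(report["accuracy"])  # 2 of 3 labels match, so accuracy is 2/3
```

A production pipeline would layer robustness suites, memory tracking, and dashboards on top of the same per-example loop.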
AI Research Engineer (Fine-tuning)
Posted today
Job Description
About the job:
As a member of the AI model team, you will drive innovation in supervised fine-tuning methodologies for advanced models. Your work will refine pre-trained models so that they deliver enhanced intelligence, optimized performance, and domain-specific capabilities designed for real-world challenges. You will work on a wide spectrum of systems, ranging from streamlined, resource-efficient models that run on limited hardware to complex multi-modal architectures that integrate data such as text, images, and audio.
We expect you to have deep expertise in large language model architectures and substantial experience in fine-tuning optimization. You will adopt a hands-on, research-driven approach to developing, testing, and implementing new fine-tuning techniques and algorithms. Your responsibilities include curating specialized data, strengthening baseline performance, and identifying as well as resolving bottlenecks in the fine-tuning process. The goal is to unlock superior domain-adapted AI performance and push the limits of what these models can achieve.
Responsibilities:
- Develop and implement novel, state-of-the-art fine-tuning methodologies for pre-trained models with clear performance targets.
- Build, run, and monitor controlled fine-tuning experiments while tracking key performance indicators. Document iterative results and compare against benchmark datasets.
- Identify and process high-quality datasets tailored to specific domains. Set measurable criteria to ensure that data curation positively impacts model performance in fine-tuning tasks.
- Systematically debug and optimize the fine-tuning process by analyzing computational and model performance metrics.
- Collaborate with cross-functional teams to deploy fine-tuned models into production pipelines. Define clear success metrics and ensure continuous monitoring for improvements and domain adaptation.
Minimum requirements:
- A degree in Computer Science or a related field; ideally a PhD in NLP, Machine Learning, or a related field, complemented by a solid track record in AI R&D (with strong publications at A* conferences).
- Hands-on experience with large-scale fine-tuning experiments, where your contributions have led to measurable improvements in domain-specific model performance.
- Deep understanding of advanced fine-tuning methodologies, including state-of-the-art modifications for transformer architectures as well as alternative approaches, with an emphasis on techniques that enhance model intelligence, efficiency, and scalability within fine-tuning workflows.
- Strong expertise in PyTorch and the Hugging Face libraries, with practical experience developing fine-tuning pipelines, continuously adapting models to new data, and deploying refined models to production on target platforms.
- Demonstrated ability to apply empirical research to overcome fine-tuning bottlenecks, including designing evaluation frameworks and iterating on algorithmic improvements to continuously push the boundaries of fine-tuned AI performance.
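For readers unfamiliar with the parameter-efficient techniques this role touches, the core of a LoRA-style method is a low-rank update added to frozen weights. The sketch below shows only that arithmetic, in plain Python with toy dimensions; real pipelines would use PyTorch and the Hugging Face PEFT library, and every value here is illustrative:

```python
def matmul(A, B):
    # Naive matrix multiply, adequate for tiny illustrative matrices.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def lora_merge(W, A, B, alpha):
    """Merge a trained LoRA adapter into a frozen weight matrix.

    W is d_out x d_in (frozen); B is d_out x r and A is r x d_in
    (trained); the effective weights are W + (alpha / r) * (B @ A).
    """
    r = len(A)                  # LoRA rank
    scale = alpha / r
    delta = matmul(B, A)        # low-rank update
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# Toy 2x2 example with rank r = 1 and alpha = 2.
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]              # d_out x r
A = [[0.5, 0.5]]                # r x d_in
print(lora_merge(W, A, B, alpha=2.0))  # → [[2.0, 1.0], [2.0, 3.0]]
```

Because only A and B are trained, the number of trainable parameters scales with the rank r rather than with d_out * d_in, which is what makes such fine-tuning cheap relative to full fine-tuning.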
AI Research Engineer (Pre-training)
Posted today
Job Description
About the job:
As a member of the AI model team, you will drive innovation in architecture development for cutting-edge models of various scales, including small, large, and multi-modal systems. Your work will enhance intelligence, improve efficiency, and introduce new capabilities to advance the field.
You will bring deep expertise in LLM architectures and a strong grasp of pre-training optimization, with a hands-on, research-driven approach. Your mission is to explore and implement novel techniques and algorithms that lead to groundbreaking advancements: curating data, strengthening baselines, and identifying and resolving pre-training bottlenecks to push the limits of AI performance.
Responsibilities:
- Conduct pre-training of AI models on large, distributed servers equipped with thousands of NVIDIA GPUs.
- Design, prototype, and scale innovative architectures to enhance model intelligence.
- Independently and collaboratively execute experiments, analyze results, and refine methodologies for optimal performance.
- Investigate, debug, and improve both model efficiency and computational performance.
- Contribute to the advancement of training systems to ensure seamless scalability and efficiency on target platforms.
Minimum requirements:
- A degree in Computer Science or a related field; ideally a PhD in NLP, Machine Learning, or a related field, complemented by a solid track record in AI R&D (with strong publications at A* conferences).
- Hands-on experience contributing to large-scale LLM training runs on large, distributed servers equipped with thousands of NVIDIA GPUs, ensuring scalability and impactful advancements in model performance.
- Familiarity and practical experience with large-scale, distributed training frameworks, libraries, and tools.
- Deep knowledge of state-of-the-art transformer and non-transformer modifications aimed at enhancing intelligence, efficiency, and scalability.
- Strong expertise in PyTorch and the Hugging Face libraries, with practical experience in model development, continual pre-training, and deployment.
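The distributed pre-training described above typically relies on data parallelism: each GPU computes gradients on its own data shard, the gradients are averaged across workers (an all-reduce), and a single shared update is applied. A toy, single-process sketch of that update rule (all numbers hypothetical, no GPUs involved):

```python
def sgd_step(w, grad, lr):
    # Plain SGD update on a flat parameter vector.
    return [wi - lr * g for wi, g in zip(w, grad)]

def data_parallel_step(w, shard_grads, lr):
    """One data-parallel update: average per-worker gradients (the
    all-reduce step), then apply a single SGD step to the shared
    weights, as in multi-GPU pre-training."""
    n = len(shard_grads)
    avg_grad = [sum(gs) / n for gs in zip(*shard_grads)]
    return sgd_step(w, avg_grad, lr)

w = [1.0, -2.0]
grads = [[0.2, 0.4], [0.6, 0.0]]  # gradients from two workers
print(data_parallel_step(w, grads, lr=0.5))  # → [0.8, -2.1]
```

Real frameworks (e.g., PyTorch DDP or FSDP) overlap this averaging with computation and shard optimizer state, but the arithmetic per step is the same.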
AI Research Engineer (Model Serving & Inference)
Posted today
Job Description
About the job:
As part of our AI model team, you will innovate in model serving and inference architectures for advanced AI systems, focusing on optimizing deployment strategies for responsiveness, efficiency, and scalability across various hardware environments, including resource-constrained devices and complex multi-modal systems.
Your role involves developing, testing, and deploying robust inference pipelines, establishing performance metrics, and troubleshooting bottlenecks to ensure high-throughput, low-latency AI performance in real-world applications.
Responsibilities :
- Design and deploy high-performance model serving architectures optimized for diverse environments, including edge devices.
- Set and track performance targets such as latency reduction, token response improvements, and memory minimization.
- Conduct inference testing in simulated and live environments, monitoring key metrics and documenting results for continuous improvement.
- Prepare high-quality datasets and scenarios for real-world deployment challenges, especially on low-resource devices.
- Analyze and address bottlenecks in serving pipelines to enhance scalability and reliability.
- Collaborate with teams to integrate optimized inference frameworks into production, ensuring continuous monitoring and improvement.
Minimum requirements:
- Degree in Computer Science or related field; PhD preferred, with a strong record in AI R&D.
- Proven experience in kernel and inference optimization on mobile devices, demonstrating measurable performance improvements.
- Deep understanding of modern model serving architectures and optimization techniques for resource-constrained environments.
- Expertise in CPU and GPU kernel development for mobile platforms and experience deploying inference pipelines.
- Ability to translate empirical research into practical optimizations, with skills in evaluation frameworks and iterative improvement.
Senior AI Research Engineer, Model Inference (Remote)
Posted today
Job Description
About the job:
We are looking for an experienced AI Model Engineer with deep expertise in kernel development, model optimization, fine-tuning, and GPU acceleration. The engineer will extend the inference framework to support inference and fine-tuning for language models with a strong focus on mobile and integrated GPU acceleration (Vulkan).
This role requires hands-on experience with quantization techniques, LoRA architectures, Vulkan backend, and mobile GPU debugging. You will play a critical role in pushing the boundaries of desktop and on-device inference and fine-tuning performance for next-generation SLM/LLMs.
Responsibilities
- Implement and optimize custom inference and fine-tuning kernels for small and large language models across multiple hardware backends.
- Implement and optimize full and LoRA fine-tuning for small and large language models across multiple hardware backends.
- Design and extend datatype and precision support (int, float, mixed precision, ternary QTypes, etc.).
- Design, customize, and optimize Vulkan compute shaders for quantized operators and fine-tuning workflows.
- Investigate and resolve GPU acceleration issues on Vulkan and integrated/mobile GPUs.
- Architect and prepare support for advanced quantization techniques to improve efficiency and memory usage.
- Debug and optimize GPU operators (e.g., int8, fp16, fp4, ternary).
- Integrate and validate quantization workflows for training and inference.
- Conduct evaluation and benchmarking (e.g., perplexity testing, fine-tuned adapter performance).
- Conduct GPU testing across desktop and mobile devices.
- Collaborate with research and engineering teams to prototype, benchmark, and scale new model optimization methods.
- Deliver production-grade, efficient language model deployment for mobile and edge use cases.
- Work closely with cross-functional teams to integrate optimized serving and inference frameworks into production pipelines designed for edge and on-device applications. Define clear success metrics such as improved real-world performance, low error rates, robust scalability, optimal memory usage and ensure continuous monitoring and iterative refinements for sustained improvements.
Minimum requirements:
- Proficiency in C++ and GPU kernel programming.
- Proven expertise in GPU acceleration with the Vulkan framework.
- Strong background in quantization and mixed-precision model optimization.
- Expertise in Vulkan compute shader development and customization.
- Familiarity with LoRA fine-tuning and parameter-efficient training methods.
- Ability to debug GPU-specific performance and stability issues on desktop and mobile devices.
- Hands-on experience with mobile GPU acceleration and model inference.
- Familiarity with large language model architectures (e.g., Qwen, Gemma, LLaMA, Falcon).
- Experience implementing custom backward operators for fine-tuning.
- Experience creating and curating custom datasets for style transfer and domain-specific fine-tuning.
- Demonstrated ability to apply empirical research to overcome challenges in model development.
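Several requirements above center on quantization. As a point of reference, symmetric per-tensor int8 quantization (one illustrative variant among the schemes the role covers, sketched in Python rather than in the C++/Vulkan kernels the job involves) works like this:

```python
def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization (illustrative only).

    Maps floats to integers in [-127, 127] using a single scale;
    returns the quantized values plus the scale for dequantization.
    """
    scale = (max(abs(w) for w in weights) / 127.0) or 1.0  # guard all-zero input
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.05, -0.32, 0.17, 0.254]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print(max_err <= s / 2)  # True: rounding error is bounded by half a step
```

Lower-precision formats such as fp4 or ternary QTypes trade more of this rounding error for memory and bandwidth savings, which is the trade-off the GPU operators in this role implement.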
Important information for candidates
Recruitment scams have become increasingly common. To protect yourself, please keep the following in mind when applying for roles:
Apply only through our official channels. We do not use third-party platforms or agencies for recruitment unless clearly stated. All open roles are listed on our official careers page.
Verify the recruiter’s identity. All our recruiters have verified LinkedIn profiles. If you’re unsure, you can confirm their identity by checking their profile or contacting us through our website.
Be cautious of unusual communication methods. We do not conduct interviews over WhatsApp, Telegram, or SMS. All communication is done through official company emails and platforms.
Double-check email addresses. All communication from us will come from emails ending with tether.to or tether.io.
We will never request payment or financial details. If someone asks for personal financial information or payment at any point during the hiring process, it is a scam. Please report it immediately.
When in doubt, feel free to reach out through our official website.
#J-18808-LjbffrIs this job a match or a miss?
Senior AI Research Engineer, Model Inference (Remote)
Posted today
Job Viewed
Job Description
Join Tether and Shape the Future of Digital Finance
At Tether, we’re not just building products, we’re pioneering a global financial revolution. Our cutting-edge solutions empower businesses—from exchanges and wallets to payment processors and ATMs—to seamlessly integrate reserve-backed tokens across blockchains. By harnessing the power of blockchain technology, Tether enables you to store, send, and receive digital tokens instantly, securely, and globally, all at a fraction of the cost. Transparency is the bedrock of everything we do, ensuring trust in every transaction.
Innovate with Tether
Tether Finance: Our innovative product suite features the world’s most trusted stablecoin, USDT, relied upon by hundreds of millions worldwide, alongside pioneering digital asset tokenization services.
But that’s just the beginning:
Tether Power: Driving sustainable growth, our energy solutions optimize excess power for Bitcoin mining using eco-friendly practices in state-of-the-art, geo-diverse facilities.
Tether Data: Fueling breakthroughs in AI and peer-to-peer technology, we reduce infrastructure costs and enhance global communications with cutting-edge solutions like KEET, our flagship app that redefines secure and private data sharing.
Tether Education: Democratizing access to top-tier digital learning, we empower individuals to thrive in the digital and gig economies, driving global growth and opportunity.
Tether Evolution: At the intersection of technology and human potential, we are pushing the boundaries of what is possible, crafting a future where innovation and human capabilities merge in powerful, unprecedented ways.
Why Join Us?
Our team is a global talent powerhouse, working remotely from every corner of the world. If you’re passionate about making a mark in the fintech space, this is your opportunity to collaborate with some of the brightest minds, pushing boundaries and setting new standards. We’ve grown fast, stayed lean, and secured our place as a leader in the industry.
If you have excellent English communication skills and are ready to contribute to the most innovative platform on the planet, Tether is the place for you.
Are you ready to be part of the future?
About the job: We are looking for an experienced AI Model Engineer with deep expertise in kernel development, model optimization, fine-tuning, and GPU acceleration. The engineer will extend the inference framework to support inference and fine-tuning for language models, with a strong focus on mobile and integrated GPU acceleration (Vulkan).
This role requires hands-on experience with quantization techniques, LoRA architectures, Vulkan backend, and mobile GPU debugging. You will play a critical role in pushing the boundaries of desktop and on-device inference and fine-tuning performance for next-generation SLM/LLMs.
Responsibilities:
- Implement and optimize custom inference and fine-tuning kernels for small and large language models across multiple hardware backends.
- Implement and optimize full and LoRA fine-tuning for small and large language models across multiple hardware backends.
- Design and extend datatype and precision support (int, float, mixed precision, ternary QTypes, etc.).
- Design, customize, and optimize Vulkan compute shaders for quantized operators and fine-tuning workflows.
- Investigate and resolve GPU acceleration issues on Vulkan and integrated/mobile GPUs.
- Architect and prepare support for advanced quantization techniques to improve efficiency and memory usage.
- Debug and optimize GPU operators (e.g., int8, fp16, fp4, ternary).
- Integrate and validate quantization workflows for training and inference.
- Conduct evaluation and benchmarking (e.g., perplexity testing, fine-tuned adapter performance).
- Conduct GPU testing across desktop and mobile devices.
- Collaborate with research and engineering teams to prototype, benchmark, and scale new model optimization methods.
- Deliver production-grade, efficient language model deployment for mobile and edge use cases.
- Work closely with cross-functional teams to integrate optimized serving and inference frameworks into production pipelines designed for edge and on-device applications. Define clear success metrics, such as improved real-world performance, low error rates, robust scalability, and optimal memory usage, and ensure continuous monitoring and iterative refinement for sustained improvements.
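Several of the responsibilities above revolve around quantized operators. As a purely illustrative sketch (a NumPy stand-in, not the team's actual kernels or APIs), symmetric per-tensor int8 quantization maps float weights onto the signed 8-bit range and reconstructs them with a single scale factor:

```python
import numpy as np

def quantize_int8(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127]."""
    scale = max(float(np.max(np.abs(x))) / 127.0, 1e-12)  # avoid divide-by-zero
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximation of the original floats."""
    return q.astype(np.float32) * scale

weights = np.random.default_rng(0).standard_normal((4, 4)).astype(np.float32)
q, scale = quantize_int8(weights)
recovered = dequantize_int8(q, scale)
max_err = float(np.max(np.abs(weights - recovered)))  # bounded by scale / 2
```

The rounding error per element is at most half a quantization step, which is the kind of accuracy/memory trade-off the evaluation and benchmarking tasks above would measure (e.g., via perplexity).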
Requirements:
- Proficiency in C++ and GPU kernel programming.
- Proven expertise in GPU acceleration with the Vulkan framework.
- Strong background in quantization and mixed-precision model optimization.
- Experience and expertise in Vulkan compute shader development and customization.
- Familiarity with LoRA fine-tuning and parameter-efficient training methods.
- Ability to debug GPU-specific performance and stability issues on desktop and mobile devices.
- Hands-on experience with mobile GPU acceleration and model inference.
- Familiarity with large language model architectures (e.g., Qwen, Gemma, LLaMA, Falcon).
- Experience implementing custom backward operators for fine-tuning.
- Experience creating and curating custom datasets for style transfer and domain-specific fine-tuning.
- Demonstrated ability to apply empirical research to overcome challenges in model development.
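LoRA, mentioned in the requirements above, factors the weight update into two low-rank matrices so that only a small fraction of parameters are trained. A minimal NumPy sketch of the forward pass (illustrative only; the dimensions, rank, and scaling are assumptions, not taken from the posting):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 64, 4, 8.0  # hidden size, LoRA rank (r << d), scaling factor

W = rng.standard_normal((d, d)).astype(np.float32)           # frozen base weight
A = (0.01 * rng.standard_normal((r, d))).astype(np.float32)  # trainable down-projection
B = np.zeros((d, r), dtype=np.float32)                       # trainable up-projection, zero-init

def lora_forward(x: np.ndarray) -> np.ndarray:
    """y = W x + (alpha / r) * B (A x); only A and B receive gradient updates."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d).astype(np.float32)
y = lora_forward(x)
```

Because B is zero-initialized, the adapter starts as a no-op and fine-tuning only ever touches the 2·d·r adapter parameters rather than the d² base weights.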
Important information for candidates
Recruitment scams have become increasingly common. To protect yourself, please keep the following in mind when applying for roles:
- Apply only through our official channels. We do not use third-party platforms or agencies for recruitment unless clearly stated. All open roles are listed on our official careers page.
- Verify the recruiter’s identity. All our recruiters have verified LinkedIn profiles. If you’re unsure, you can confirm their identity by checking their profile or contacting us through our website.
- Be cautious of unusual communication methods. We do not conduct interviews over WhatsApp, Telegram, or SMS. All communication is done through official company emails and platforms.
- Double-check email addresses. All communication from us will come from emails ending in @tether.to or @tether.io.
- We will never request payment or financial details. If someone asks for personal financial information or payment at any point during the hiring process, it is a scam. Please report it immediately.
When in doubt, feel free to reach out through our official website.
AI Module Development Engineer - IoT & Digital Platforms
Posted today
Job Description
Job Purpose:
To design, build, and integrate AI modules into IoT/IIoT and digital platforms by managing the full machine learning lifecycle — from model training to deployment — ensuring scalable and business-aligned solutions.
Key Responsibilities:
- AI Module Development & Integration – Design and build AI modules that integrate seamlessly into IoT/IIoT and digital platforms.
- Machine Learning Model Lifecycle – Develop, train, validate, and maintain ML models for predictive maintenance, anomaly detection, optimization, and risk scoring.
- System & Platform Integration – Work with APIs, data pipelines, and dashboards to integrate AI solutions into enterprise and industrial systems.
- Model Training – Prepare data, select algorithms, and train models with high accuracy and scalability.
- Production Deployment – Deploy, monitor, and optimize ML models in production environments (cloud and edge).
- Collaborate with cross-functional product and engineering teams to ensure AI solutions meet business requirements.
Requirements:
- Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or related field.
- Strong expertise in AI/ML frameworks (TensorFlow, PyTorch, Scikit-Learn).
- Proven hands-on experience in ML model training and production deployment.
- Solid understanding of IoT/IIoT platforms (AWS IoT, Azure IoT, or custom).
- Experience in data engineering & API integration.
- Familiarity with cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes).
- End-to-end ownership of ML model lifecycle.
- Strong analytical and problem-solving skills.
- Ability to bridge technical solutions with business impact.