23 AI Research jobs in the United Arab Emirates
AI Research Engineer (Model Evaluation)
Posted today
Job Description
Join Tether and Shape the Future of Digital Finance
At Tether, we’re not just building products; we’re pioneering a global financial revolution. Our solutions enable seamless integration of reserve-backed tokens across blockchains, empowering businesses worldwide. Transparency and trust are at the core of everything we do.
Innovate with Tether
Tether Finance: Our product suite features the trusted stablecoin USDT and digital asset tokenization services.
Tether Power: We promote sustainable Bitcoin mining using eco-friendly practices.
Tether Data: We develop AI and P2P solutions like KEET for secure data sharing.
Tether Education: We democratize digital learning to empower individuals globally.
Tether Evolution: We push technological boundaries to merge innovation with human potential.
Why Join Us?
Our remote, global team is passionate about fintech innovation. If you have excellent English skills and want to contribute to a leading platform, Tether is your place.
Are you ready to be part of the future?
About the job:
As part of our AI model team, you will develop evaluation frameworks for AI models across their lifecycle, focusing on metrics like accuracy, latency, and robustness. You will work on models from resource-efficient to multi-modal architectures, collaborating with cross-functional teams to implement evaluation pipelines and dashboards, setting industry standards for AI model quality.
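For a concrete sense of this work, here is a minimal, illustrative evaluation-harness sketch that tracks accuracy and latency for a generic PyTorch classifier; the model, synthetic data, and percentile choices are placeholders, not a description of Tether's internal tooling.

```python
# Minimal evaluation sketch: accuracy and latency percentiles for a PyTorch model.
# The model and data below are stand-ins; a real pipeline would load production checkpoints.
import time
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def evaluate(model: nn.Module, loader: DataLoader, device: str = "cpu") -> dict:
    model.eval().to(device)
    correct, total, latencies = 0, 0, []
    with torch.no_grad():
        for inputs, labels in loader:
            inputs, labels = inputs.to(device), labels.to(device)
            start = time.perf_counter()
            logits = model(inputs)
            latencies.append(time.perf_counter() - start)
            correct += (logits.argmax(dim=-1) == labels).sum().item()
            total += labels.numel()
    latencies.sort()
    return {
        "accuracy": correct / total,
        "p50_latency_s": latencies[len(latencies) // 2],
        "p95_latency_s": latencies[int(len(latencies) * 0.95)],
    }

if __name__ == "__main__":
    # Synthetic stand-in data: 256 samples, 32 features, 4 classes.
    X, y = torch.randn(256, 32), torch.randint(0, 4, (256,))
    loader = DataLoader(TensorDataset(X, y), batch_size=32)
    model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4))
    print(evaluate(model, loader))
```

In practice the same loop would also record memory usage and feed a dashboard, per the responsibilities below.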
Responsibilities:
- Develop and deploy evaluation frameworks assessing models during pre-training, post-training, and inference, tracking KPIs such as accuracy, latency, and memory usage.
- Curate datasets and design benchmarks to measure model robustness and improvements.
- Collaborate with product, engineering, and operations teams to align evaluation metrics with business goals, presenting findings via dashboards and reports.
- Analyze evaluation data to identify bottlenecks, proposing optimizations for performance and resource efficiency.
- Conduct experiments to refine evaluation methodologies, staying updated with emerging techniques to enhance model reliability.
Minimum requirements:
- A degree in Computer Science or related field; PhD in NLP, Machine Learning, or similar is preferred, with a strong R&D record.
- Experience designing and evaluating AI models at various stages, proficient in evaluation frameworks assessing accuracy, convergence, and robustness.
- Strong programming skills and experience building scalable evaluation pipelines, familiar with performance metrics like latency, throughput, and memory footprint.
- Ability to conduct iterative experiments, staying current with new techniques to improve benchmarking practices.
- Experience working with cross-functional teams, translating technical insights into actionable recommendations.
AI Research Engineer (Fine-tuning)
Posted today
Job Description
Join Tether and Shape the Future of Digital Finance
At Tether, we’re not just building products; we’re pioneering a global financial revolution. Our cutting-edge solutions empower businesses—from exchanges and wallets to payment processors and ATMs—to seamlessly integrate reserve-backed tokens across blockchains. By harnessing the power of blockchain technology, Tether enables you to store, send, and receive digital tokens instantly, securely, and globally, all at a fraction of the cost. Transparency is the bedrock of everything we do, ensuring trust in every transaction.
Innovate with Tether
Tether Finance: Our innovative product suite features the world’s most trusted stablecoin, USDT, relied upon by hundreds of millions worldwide, alongside pioneering digital asset tokenization services.
But that’s just the beginning:
Tether Power: Driving sustainable growth, our energy solutions optimize excess power for Bitcoin mining using eco-friendly practices in state-of-the-art, geo-diverse facilities.
Tether Data: Fueling breakthroughs in AI and peer-to-peer technology, we reduce infrastructure costs and enhance global communications with cutting-edge solutions like KEET, our flagship app that redefines secure and private data sharing.
Tether Education: Democratizing access to top-tier digital learning, we empower individuals to thrive in the digital and gig economies, driving global growth and opportunity.
Tether Evolution: At the intersection of technology and human potential, we are pushing the boundaries of what is possible, crafting a future where innovation and human capabilities merge in powerful, unprecedented ways.
Why Join Us?
Our team is a global talent powerhouse, working remotely from every corner of the world. If you’re passionate about making a mark in the fintech space, this is your opportunity to collaborate with some of the brightest minds, pushing boundaries and setting new standards. We’ve grown fast, stayed lean, and secured our place as a leader in the industry.
If you have excellent English communication skills and are ready to contribute to the most innovative platform on the planet, Tether is the place for you.
Are you ready to be part of the future?
About the job:
As a member of the AI model team, you will drive innovation in supervised fine-tuning methodologies for advanced models. Your work will refine pre-trained models so that they deliver enhanced intelligence, optimized performance, and domain-specific capabilities designed for real-world challenges. You will work on a wide spectrum of systems, ranging from streamlined, resource-efficient models that run on limited hardware to complex multi-modal architectures that integrate data such as text, images, and audio.
We expect you to have deep expertise in large language model architectures and substantial experience in fine-tuning optimization. You will adopt a hands-on, research-driven approach to developing, testing, and implementing new fine-tuning techniques and algorithms. Your responsibilities include curating specialized data, strengthening baseline performance, and identifying as well as resolving bottlenecks in the fine-tuning process. The goal is to unlock superior domain-adapted AI performance and push the limits of what these models can achieve.
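To illustrate the parameter-efficient side of this work, below is a minimal LoRA-style adapter written in plain PyTorch; the wrapped layer, rank, and scaling factor are arbitrary stand-ins rather than Tether's actual fine-tuning stack.

```python
# Minimal LoRA-style adapter: freeze a pretrained linear layer and learn a low-rank update.
# Dimensions, rank, and scaling are illustrative placeholders.
import torch
from torch import nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # keep the pretrained weights frozen
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base projection plus the trainable low-rank update.
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scaling

if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(512, 512))
    out = layer(torch.randn(4, 512))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(out.shape, f"trainable params: {trainable}")  # only the low-rank factors train
```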
Responsibilities:
Develop and implement new state-of-the-art and novel fine-tuning methodologies for pre-trained models with clear performance targets.
Build, run, and monitor controlled fine-tuning experiments while tracking key performance indicators. Document iterative results and compare against benchmark datasets.
Identify and process high-quality datasets tailored to specific domains. Set measurable criteria to ensure that data curation positively impacts model performance in fine-tuning tasks.
Systematically debug and optimize the fine-tuning process by analyzing computational and model performance metrics.
Collaborate with cross-functional teams to deploy fine-tuned models into production pipelines. Define clear success metrics and ensure continuous monitoring for improvements and domain adaptation.
Minimum requirements:
A degree in Computer Science or a related field; ideally a PhD in NLP, Machine Learning, or a similar area, complemented by a solid track record in AI R&D (with good publications in A* conferences).
Hands-on experience with large-scale fine-tuning experiments, where your contributions have led to measurable improvements in domain-specific model performance.
Deep understanding of advanced fine-tuning methodologies, including state-of-the-art modifications for transformer architectures as well as alternative approaches. Your expertise should emphasize techniques that enhance model intelligence, efficiency, and scalability within fine-tuning workflows.
Strong expertise in PyTorch and Hugging Face libraries with practical experience in developing fine-tuning pipelines, continuously adapting models to new data, and deploying these refined models in production on target platforms.
Demonstrated ability to apply empirical research to overcome fine-tuning bottlenecks. You should be comfortable designing evaluation frameworks and iterating on algorithmic improvements to continuously push the boundaries of fine-tuned AI performance.
AI Research Engineer (Pre-training)
Posted today
Job Description
Overview
Join Tether and Shape the Future of Digital Finance
At Tether, we’re not just building products; we’re pioneering a global financial revolution. Our cutting-edge solutions empower businesses—from exchanges and wallets to payment processors and ATMs—to seamlessly integrate reserve-backed tokens across blockchains. By harnessing the power of blockchain technology, Tether enables you to store, send, and receive digital tokens instantly, securely, and globally, all at a fraction of the cost. Transparency is the bedrock of everything we do, ensuring trust in every transaction.
Innovate with Tether
Tether Finance: Our innovative product suite features the world’s most trusted stablecoin, USDT, relied upon by hundreds of millions worldwide, alongside pioneering digital asset tokenization services.
Tether Power: Driving sustainable growth, our energy solutions optimize excess power for Bitcoin mining using eco-friendly practices in state-of-the-art, geo-diverse facilities.
Tether Data: Fueling breakthroughs in AI and peer-to-peer technology, we reduce infrastructure costs and enhance global communications with cutting-edge solutions like KEET, our flagship app that redefines secure and private data sharing.
Tether Education: Democratizing access to top-tier digital learning, we empower individuals to thrive in the digital and gig economies, driving global growth and opportunity.
Tether Evolution: At the intersection of technology and human potential, we are pushing the boundaries of what is possible, crafting a future where innovation and human capabilities merge in powerful, unprecedented ways.
Why Join Us?
Our team is a global talent powerhouse, working remotely from every corner of the world. If you’re passionate about making a mark in the fintech space, this is your opportunity to collaborate with some of the brightest minds, pushing boundaries and setting new standards. We’ve grown fast, stayed lean, and secured our place as a leader in the industry.
If you have excellent English communication skills and are ready to contribute to the most innovative platform on the planet, Tether is the place for you.
Are you ready to be part of the future?
About the job:
As a member of the AI model team, you will drive innovation in architecture development for cutting-edge models of various scales, including small, large, and multi-modal systems. Your work will enhance intelligence, improve efficiency, and introduce new capabilities to advance the field.
You will bring deep expertise in LLM architectures and a strong grasp of pre-training optimization, together with a hands-on, research-driven approach. Your mission is to explore and implement novel techniques and algorithms that lead to groundbreaking advancements: curating data, strengthening baselines, and identifying and resolving pre-training bottlenecks to push the limits of AI performance.
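As a rough sketch of the distributed setting described above, the following DistributedDataParallel skeleton shows the shape of a multi-GPU training step when launched with torchrun; the model, data, and hyperparameters are placeholders only.

```python
# Skeleton of a distributed training step with PyTorch DDP, launched via:
#   torchrun --nproc_per_node=8 pretrain_sketch.py
# Model, data, and hyperparameters are illustrative placeholders.
import os
import torch
import torch.distributed as dist
from torch import nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = DDP(nn.Linear(1024, 1024).cuda(local_rank), device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):                               # stand-in training loop
        batch = torch.randn(32, 1024, device=local_rank)  # replace with real token batches
        loss = model(batch).pow(2).mean()                  # placeholder objective
        optimizer.zero_grad()
        loss.backward()                                    # gradients are all-reduced across ranks
        optimizer.step()
        if dist.get_rank() == 0 and step % 10 == 0:
            print(f"step {step} loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```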
Responsibilities:
Conduct pre-training of AI models on large, distributed servers equipped with thousands of NVIDIA GPUs.
Design, prototype, and scale innovative architectures to enhance model intelligence.
Independently and collaboratively execute experiments, analyze results, and refine methodologies for optimal performance.
Investigate, debug, and improve both model efficiency and computational performance.
Contribute to the advancement of training systems to ensure seamless scalability and efficiency on target platforms.
Minimum requirements:
A degree in Computer Science or a related field; ideally a PhD in NLP, Machine Learning, or a similar area, complemented by a solid track record in AI R&D (with good publications in A* conferences).
Hands-on experience contributing to large-scale LLM training runs on large, distributed servers equipped with thousands of NVIDIA GPUs, ensuring scalability and impactful advancements in model performance.
Familiarity and practical experience with large-scale, distributed training frameworks, libraries and tools.
Deep knowledge of state-of-the-art transformer and non-transformer modifications aimed at enhancing intelligence, efficiency and scalability.
Strong expertise in PyTorch and Hugging Face libraries with practical experience in model development, continual pretraining, and deployment.
AI Research Engineer (Model Serving & Inference)
Posted today
Job Description
Join Tether and Shape the Future of Digital Finance
At Tether, we’re not just building products; we’re pioneering a global financial revolution. Our solutions enable seamless integration of reserve-backed tokens across blockchains, empowering businesses worldwide. Transparency and security are at the core of our mission to build trust in digital transactions.
Innovate with Tether
Tether Finance: Home of the trusted stablecoin USDT and innovative digital asset tokenization services.
Tether Power: Eco-friendly energy solutions for Bitcoin mining, utilizing sustainable practices across diverse locations.
Tether Data: Advancing AI and peer-to-peer tech with solutions like KEET for secure data sharing.
Tether Education: Providing accessible digital learning to empower individuals in the digital economy.
Tether Evolution: Pushing technological boundaries to merge human potential with innovation.
Why Join Us?
Our remote, global team is passionate about fintech innovation. Join us to work alongside top talent, influence industry standards, and grow with a fast-paced, industry-leading company.
If you excel in English communication and are eager to contribute to cutting-edge platforms, Tether is your next career move.
About the job:
As part of our AI model team, you will innovate in model serving and inference architectures for advanced AI systems, focusing on optimizing deployment strategies for responsiveness, efficiency, and scalability across various hardware environments, including resource-constrained devices and complex multi-modal systems.
Your role involves developing, testing, and deploying robust inference pipelines, establishing performance metrics, and troubleshooting bottlenecks to ensure high-throughput, low-latency AI performance in real-world applications.
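For illustration, here is a minimal benchmark of the serving metrics this role cares about (time-to-first-token and tokens per second), with the generator stubbed out; a real pipeline would call the deployed model runtime instead of the simulated decoder below.

```python
# Minimal serving benchmark: time-to-first-token and tokens/sec for a stubbed generator.
# generate_tokens is a placeholder; a real pipeline would call the deployed model runtime.
import time
import statistics

def generate_tokens(prompt: str, max_tokens: int = 64):
    for i in range(max_tokens):           # stand-in for real autoregressive decoding
        time.sleep(0.002)                 # simulated per-token compute
        yield f"tok{i}"

def benchmark(prompt: str, runs: int = 5) -> dict:
    ttft, tps = [], []
    for _ in range(runs):
        start = time.perf_counter()
        first, count = None, 0
        for _ in generate_tokens(prompt):
            count += 1
            if first is None:
                first = time.perf_counter() - start
        total = time.perf_counter() - start
        ttft.append(first)
        tps.append(count / total)
    return {
        "p50_time_to_first_token_s": statistics.median(ttft),
        "p50_tokens_per_s": statistics.median(tps),
    }

if __name__ == "__main__":
    print(benchmark("hello world"))
```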
Responsibilities:
- Design and deploy high-performance model serving architectures optimized for diverse environments, including edge devices.
- Set and track performance targets such as latency reduction, token response improvements, and memory minimization.
- Conduct inference testing in simulated and live environments, monitoring key metrics and documenting results for continuous improvement.
- Prepare high-quality datasets and scenarios for real-world deployment challenges, especially on low-resource devices.
- Analyze and address bottlenecks in serving pipelines to enhance scalability and reliability.
- Collaborate with teams to integrate optimized inference frameworks into production, ensuring continuous monitoring and improvement.
Minimum requirements:
- Degree in Computer Science or related field; PhD preferred, with a strong record in AI R&D.
- Proven experience in kernel and inference optimization on mobile devices, demonstrating measurable performance improvements.
- Deep understanding of modern model serving architectures and optimization techniques for resource-constrained environments.
- Expertise in CPU and GPU kernel development for mobile platforms and experience deploying inference pipelines.
- Ability to translate empirical research into practical optimizations, with skills in evaluation frameworks and iterative improvement.
Senior AI Research Engineer, Model Inference (Remote)
Posted today
Job Description
Overview
Join Tether and shape the future of digital finance. At Tether, we’re pioneering a global financial revolution. Our solutions empower businesses—from exchanges and wallets to payment processors and ATMs—to seamlessly integrate reserve-backed tokens across blockchains. By harnessing blockchain technology, Tether enables you to store, send, and receive digital tokens instantly, securely, and globally, at a fraction of the cost. Transparency is the bedrock of everything we do, ensuring trust in every transaction.
Innovate with Tether
Tether Finance: Our innovative product suite features the world’s most trusted stablecoin, USDT, relied upon by hundreds of millions worldwide, alongside pioneering digital asset tokenization services.
Tether Power: Driving sustainable growth, our energy solutions optimize excess power for Bitcoin mining using eco-friendly practices in state-of-the-art, geo-diverse facilities.
Tether Data: Fueling breakthroughs in AI and peer-to-peer technology, we reduce infrastructure costs and enhance global communications with cutting-edge solutions like KEET, our flagship app that redefines secure and private data sharing.
Tether Education: Democratizing access to top-tier digital learning, we empower individuals to thrive in the digital and gig economies, driving global growth and opportunity.
Tether Evolution: At the intersection of technology and human potential, we are pushing the boundaries of what is possible, crafting a future where innovation and human capabilities merge in powerful, unprecedented ways.
Why Join Us?
Our team is a global talent powerhouse, working remotely from every corner of the world. If you’re passionate about making a mark in the fintech space, this is your opportunity to collaborate with some of the brightest minds, pushing boundaries and setting new standards. We’ve grown fast, stayed lean, and secured our place as a leader in the industry.
If you have excellent English communication skills and are ready to contribute to the most innovative platform on the planet, Tether is the place for you.
Are you ready to be part of the future?
About the job:
We are looking for an experienced AI Model Engineer with deep expertise in kernel development, model optimization, fine-tuning, and GPU acceleration. The engineer will extend the inference framework to support inference and fine-tuning for language models with a strong focus on mobile and integrated GPU acceleration (Vulkan).
This role requires hands-on experience with quantization techniques, LoRA architectures, Vulkan backend, and mobile GPU debugging. You will play a critical role in pushing the boundaries of desktop and on-device inference and fine-tuning performance for next-generation SLM/LLMs.
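As a small illustration of the numerics involved, here is a symmetric per-tensor int8 quantization/dequantization sketch in PyTorch; in this role the production kernels live in C++ and Vulkan compute shaders, so this Python version only shows the scale/clamp/round logic and the resulting accuracy and memory trade-off.

```python
# Minimal symmetric per-tensor int8 quantization sketch (illustrative only;
# production kernels for this role target C++/Vulkan compute shaders).
import torch

def quantize_int8(w: torch.Tensor):
    scale = w.abs().max() / 127.0                          # symmetric per-tensor scale
    q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

if __name__ == "__main__":
    w = torch.randn(4096, 4096)                            # stand-in weight matrix
    q, scale = quantize_int8(w)
    err = (dequantize(q, scale) - w).abs().mean()
    print(f"mean abs error: {err:.5f}, bytes: {w.numel() * 4} -> {q.numel()}")
```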
Responsibilities:
- Implement and optimize custom inference and fine-tuning kernels for small and large language models across multiple hardware backends.
- Implement and optimize full and LoRA fine-tuning for small and large language models across multiple hardware backends.
- Design and extend datatype and precision support (int, float, mixed precision, ternary QTypes, etc.).
- Design, customize, and optimize Vulkan compute shaders for quantized operators and fine-tuning workflows.
- Investigate and resolve GPU acceleration issues on Vulkan and integrated/mobile GPUs.
- Architect and prepare support for advanced quantization techniques to improve efficiency and memory usage.
- Debug and optimize GPU operators (e.g., int8, fp16, fp4, ternary).
- Integrate and validate quantization workflows for training and inference.
- Conduct evaluation and benchmarking (e.g., perplexity testing, fine-tuned adapter performance).
- Conduct GPU testing across desktop and mobile devices.
- Collaborate with research and engineering teams to prototype, benchmark, and scale new model optimization methods.
- Deliver production-grade, efficient language model deployment for mobile and edge use cases.
- Work closely with cross-functional teams to integrate optimized serving and inference frameworks into production pipelines designed for edge and on-device applications. Define clear success metrics, such as improved real-world performance, low error rates, robust scalability, and optimal memory usage, and ensure continuous monitoring and iterative refinement for sustained improvements.
Minimum requirements:
- Proficiency in C++ and GPU kernel programming.
- Proven expertise in GPU acceleration with the Vulkan framework.
- Strong background in quantization and mixed-precision model optimization.
- Experience and expertise in Vulkan compute shader development and customization.
- Familiarity with LoRA fine-tuning and parameter-efficient training methods.
- Ability to debug GPU-specific performance and stability issues on desktop and mobile devices.
- Hands-on experience with mobile GPU acceleration and model inference.
- Familiarity with large language model architectures (e.g., Qwen, Gemma, LLaMA, Falcon etc.).
- Experience implementing custom backward operators for fine-tuning.
- Experience creating and curating custom datasets for style transfer and domain-specific fine-tuning.
- Demonstrated ability to apply empirical research to overcome challenges in model development.
Important information for candidates
Recruitment scams have become increasingly common. To protect yourself, please keep the following in mind when applying for roles:
Apply only through our official channels. We do not use third-party platforms or agencies for recruitment unless clearly stated. All open roles are listed on our official careers page.
Verify the recruiter’s identity. All our recruiters have verified LinkedIn profiles. If you’re unsure, you can confirm their identity by checking their profile or contacting us through our website.
Be cautious of unusual communication methods. We do not conduct interviews over WhatsApp, Telegram, or SMS. All communication is done through official company emails and platforms.
Double-check email addresses. All communication from us will come from emails ending with tether.to or tether.io.
We will never request payment or financial details. If someone asks for personal financial information or payment at any point during the hiring process, it is a scam. Please report it immediately.
When in doubt, feel free to reach out through our official website.
AI Software Development Consultant
Posted today
Job Description
Overview
Design, develop, and deploy AI/ML models to solve business problems. Collaborate with cross-functional teams to identify opportunities for AI integration, translate client requirements into scalable AI-powered solutions, and provide consulting expertise on AI strategies, tools, and implementation. Stay up to date with the latest AI/ML frameworks, tools, and industry trends, ensuring solutions are robust, scalable, and aligned with business goals.
Responsibilities:
- Design, develop, and deploy AI/ML models to solve business problems.
- Collaborate with cross-functional teams to identify opportunities for AI integration.
- Translate client requirements into scalable AI-powered solutions.
- Provide consulting expertise on AI strategies, tools, and implementation.
- Stay up to date with the latest AI/ML frameworks, tools, and industry trends.
- Ensure solutions are robust, scalable, and aligned with business goals.
Requirements:
- Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or related field.
- Strong programming skills in Python, R, or similar languages.
- Experience with TensorFlow, PyTorch, Keras, or other AI frameworks.
- Knowledge of cloud platforms (AWS, Azure, GCP) for AI/ML solutions.
- Excellent problem-solving and consulting skills.
- Strong communication and ability to present AI solutions to non-technical stakeholders.
What we offer:
- Opportunity to work on cutting-edge AI projects.
- Collaborative and innovative work environment.
- Career growth and continuous learning opportunities.
- Be part of an international, forward-thinking team.
AI Module Development Engineer - IoT & Digital Platforms
Posted today
Job Description
Job Purpose:
To design, build, and integrate AI modules into IoT/IIoT and digital platforms by managing the full machine learning lifecycle — from model training to deployment — ensuring scalable and business-aligned solutions.
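To make that lifecycle concrete, here is a minimal train-persist-score sketch for the anomaly-detection side of the role, using scikit-learn's IsolationForest on synthetic sensor readings; the feature columns, contamination rate, and file path are placeholders rather than the actual platform's pipeline.

```python
# Minimal anomaly-detection lifecycle sketch: train, persist, reload, score.
# Synthetic "sensor" data, contamination rate, and the model path are placeholders.
import numpy as np
import joblib
from sklearn.ensemble import IsolationForest

def train(readings: np.ndarray, path: str = "anomaly_model.joblib") -> IsolationForest:
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(readings)
    joblib.dump(model, path)              # persist for later deployment
    return model

def score(readings: np.ndarray, path: str = "anomaly_model.joblib") -> np.ndarray:
    model = joblib.load(path)
    return model.predict(readings)        # -1 = anomaly, 1 = normal

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    normal = rng.normal(loc=50.0, scale=2.0, size=(1000, 3))   # e.g. temperature, vibration, current
    train(normal)
    samples = np.vstack([normal[:5], [[120.0, 30.0, 9.0]]])    # last row is an obvious outlier
    print(score(samples))
```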
Key Responsibilities:
- AI Module Development & Integration – Design and build AI modules that integrate seamlessly into IoT/IIoT and digital platforms.
- Machine Learning Model Lifecycle – Develop, train, validate, and maintain ML models for predictive maintenance, anomaly detection, optimization, and risk scoring.
- System & Platform Integration – Work with APIs, data pipelines, and dashboards to integrate AI solutions into enterprise and industrial systems.
- Model Training – Prepare data, select algorithms, and train models with high accuracy and scalability.
- Production Deployment – Deploy, monitor, and optimize ML models in production environments (cloud and edge).
- Collaborate with cross-functional product and engineering teams to ensure AI solutions meet business requirements.
Requirements:
- Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or related field.
- Strong expertise in AI/ML frameworks (TensorFlow, PyTorch, Scikit-Learn).
- Proven hands-on experience in ML model training and production deployment.
- Solid understanding of IoT/IIoT platforms (AWS IoT, Azure IoT, or custom).
- Experience in data engineering & API integration.
- Familiarity with cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes).
- End-to-end ownership of ML model lifecycle.
- Strong analytical and problem-solving skills.
- Ability to bridge technical solutions with business impact.
AI/ML Intern - Full-Stack AI Solutions Development
Posted today
Job Description
We are looking for a passionate AI/ML Intern to help us design, develop, and deploy AI-powered solutions and products. This role involves building both AI/ML models and the applications or platforms where these models will be utilized. The ideal candidate should have strong programming skills, a solid understanding of machine learning and AI concepts, and the ability to develop end-to-end solutions, including applications leveraging LLMs (Large Language Models) and agentic systems.
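As an illustration of the agentic pattern mentioned above, here is a minimal tool-calling loop with the LLM call stubbed out; the tool registry, prompt protocol, and function names are hypothetical and not tied to any specific framework.

```python
# Minimal agent loop sketch: route a request to a tool, then compose an answer.
# call_llm is a stub standing in for a real LLM API; tools and the protocol are hypothetical.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], str]] = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),  # demo only, not for untrusted input
    "echo": lambda text: text,
}

def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call a hosted or local LLM here.
    if "returned" in prompt:
        return "FINAL: The calculator result is shown in the context above."
    if "2+2" in prompt:
        return "USE calculator: 2+2"
    return "FINAL: I can only demonstrate the loop structure."

def run_agent(user_request: str, max_steps: int = 3) -> str:
    context = user_request
    for _ in range(max_steps):
        decision = call_llm(context)
        if decision.startswith("FINAL:"):
            return decision.removeprefix("FINAL:").strip()
        tool_name, _, arg = decision.removeprefix("USE ").partition(":")
        result = TOOLS[tool_name.strip()](arg.strip())
        context += f"\n[{tool_name.strip()} returned {result}]"
    return context

if __name__ == "__main__":
    print(run_agent("What is 2+2?"))
```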
Responsibilities:
AI/ML Model Development
- Build, train, and optimize machine learning models for various use cases.
- Research and integrate Large Language Models (LLMs) and agent-based AI systems.
- Develop scalable applications or platforms to host AI/ML models.
- Implement APIs or integrations to enable seamless interaction with AI systems.
- Collaborate with cross-functional teams to define product requirements and workflows.
- Ensure high-quality delivery by testing and debugging applications and models.
- Identify technical challenges and propose innovative solutions.
- Optimize performance and scalability of AI models and applications.
- Stay updated on advancements in AI/ML, especially in agentic systems and LLMs.
- Experiment with state-of-the-art frameworks and libraries.
Requirements:
- Proficiency in Python and at least one other programming language (e.g., C, Java, etc.).
- Strong experience in data structures, algorithms, and writing clean, efficient, and maintainable code.
- Hands-on experience in building and deploying web or desktop applications.
- Familiarity with frontend and backend technologies, frameworks, and deployment pipelines.
- Knowledge of core machine learning algorithms, deep learning frameworks (e.g., TensorFlow, PyTorch), and model evaluation techniques.
- Experience working with real-world datasets and developing models to solve practical problems.
- Familiarity with LLM frameworks (e.g., OpenAI, LangChain) and experience building AI agents or similar systems.
- An understanding of concepts like reinforcement learning, multi-agent systems, or autonomous agents is a plus.
- Strong analytical and critical thinking abilities to solve complex problems efficiently.
- Ability to work independently or in a team to deliver innovative solutions under deadlines.
What you’ll gain:
- Hands-on experience in designing and deploying AI-powered products from scratch.
- Exposure to cutting-edge AI technologies and frameworks.
- Mentorship from industry experts and an opportunity to work on impactful projects.
Duration: 3-6 Months
Location: Onsite
Compensation: Unpaid
Machine Learning Engineer
Posted today
Job Description
Messilat is seeking a talented MLOps Engineer for our client's team. The perfect candidate will excel in deploying models, managing microservices, utilizing Docker, and maintaining Kubernetes. Our client is also exploring AI applications in customer service, cybersecurity, and compliance.
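For context, here is a minimal sketch of the kind of serving endpoint such deployments typically wrap, using FastAPI with a stubbed model; the route, schema, filename, and health probe are illustrative placeholders for whatever the client's production service actually exposes.

```python
# Minimal model-serving sketch with FastAPI; the "model" is a stub standing in for a real
# artifact loaded from a registry. Assuming this file is saved as serve_sketch.py, run:
#   uvicorn serve_sketch:app --port 8000
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):
    features: list[float]

class PredictResponse(BaseModel):
    score: float

def load_model():
    # Stub: a production service would load a trained model here (e.g. from object storage).
    return lambda features: sum(features) / max(len(features), 1)

model = load_model()

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    return PredictResponse(score=model(req.features))

@app.get("/healthz")
def healthz() -> dict:
    # Liveness probe target for Kubernetes.
    return {"status": "ok"}
```

A container image built around an app like this is what the Docker and Kubernetes responsibilities below would package, deploy, and monitor.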
Key Responsibilities
- Transition models from development to production, ensuring scalability and high performance. Collaborate with development teams for seamless model deployment.
- Implement monitoring and maintenance strategies for deployed models to ensure ongoing accuracy and reliability.
- Maintain Kubernetes clusters to ensure high availability and performance.
- Work with cross-functional teams to understand business requirements and deliver effective machine learning solutions.
- Develop and implement strategies that optimize efficiency and data quality.
Qualifications
- Minimum of 3 years of experience in MLOps or a related field.
- Expertise in model deployment, containerization, and orchestration (e.g., Docker, Kubernetes).
- Familiarity with cloud platforms (e.g., AWS) and on-premises environments for model deployment and management.
- Experience in deploying AI models and managing their lifecycle.
- Proficiency in Python for scripting and automation.
- Prior experience in addressing scalability and pricing concerns in ML operations.
Skills
- Model deployment, monitoring, Docker, Kubernetes, AWS Cloud services, on-premises deployment, collaboration.
If you are passionate about MLOps and looking for a challenging role where you can make a significant impact, we would love to hear from you. Apply today!