Machine Learning Engineer
SENTIAFORGE PTE. LTD.
Area: Singapore, Singapore
Salary: SGD 5,000 - SGD 6,500 per month
Contract type: Full time

AI / Machine Learning Engineer
We're an early-stage startup with established clients and a portfolio of contracts, specializing in the delivery of Agentic AI systems across diverse sectors including transportation, marketing, and academia. We're seeking a highly skilled and experienced AI / Machine Learning Engineer to join our dynamic team.
In this pivotal role, you'll be instrumental in designing, developing, deploying, and optimizing cutting-edge AI agents and multi-agent systems that autonomously execute complex actions. You'll work with a comprehensive range of technologies and data modalities, from leading AI frameworks to robust cloud and on-premise deployment solutions, driving innovation in real-world applications.
Key Responsibilities:
- Agentic AI Development:
  - Design, develop, and implement sophisticated AI agents and bots capable of independent operation, complex decision-making, and self-correction within various domain-specific contexts.
  - Utilize and seamlessly integrate with advanced AI orchestration frameworks such as LangChain and LangGraph, and potentially the Microsoft Bot Framework, to build robust conversational and task-oriented agents.
  - Work with state-of-the-art large language model (LLM) serving solutions like Ollama and vLLM to ensure efficient and scalable inference for our AI agents.
  - Implement Retrieval-Augmented Generation (RAG) systems to enable agents to access, synthesize, and generate responses based on external, up-to-date knowledge bases, mitigating hallucinations and ensuring factual accuracy (a minimal illustrative sketch follows this list).
  - Develop and integrate Memory, Reasoning, and Planning (MRP) capabilities within agents, allowing them to maintain context, reason over information, formulate multi-step plans, and adapt to dynamic environments.
  - Design and implement Agent-to-Agent (A2A) communication protocols, enabling seamless and secure collaboration, task delegation, and information exchange between different autonomous agents, regardless of their underlying frameworks or platforms.
  - Work across diverse data modalities, including image, time-series, text, and graph data, developing models and agents that can interpret, process, and generate insights from heterogeneous data sources.
- Deployment and MLOps:
  - Strategically deploy and manage AI agents and systems on leading cloud environments, including AWS, Azure, or Google Cloud Platform (GCP), ensuring high availability, scalability, and security.
  - Expose AI functionalities through well-documented and performant APIs, enabling seamless integration with client systems and applications.
  - Handle on-premise deployments, including the end-to-end setup, configuration, and maintenance of Kubernetes clusters for containerized applications.
  - Implement robust containerization strategies (e.g., Docker) and establish efficient orchestration workflows using tools like Argo to automate deployment, scaling, and management of AI services.
  - Establish and maintain CI/CD pipelines for AI models and applications, ensuring rapid iteration and reliable delivery.
- Model Training & Fine-tuning:
  - Leverage expertise in deep learning frameworks (e.g., TensorFlow, PyTorch) to train, fine-tune, and optimize custom AI models across various data types, particularly where off-the-shelf models don't meet specific project requirements, ensuring peak performance and accuracy.
  - Conduct experimentation, hyperparameter tuning, and model evaluation to achieve optimal model performance.
  - Stay abreast of the latest research and advancements in AI/ML, particularly in the areas of large language models and agentic AI, to continuously improve our systems.
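For concreteness, here is a minimal, framework-agnostic sketch of the RAG pattern referenced in the list above: retrieve the passages most relevant to a query, then ground the prompt in them before calling an LLM. The toy embed() function, the in-memory store, and the sample documents are hypothetical stand-ins for illustration, not a real embedding model or any specific LangChain, Ollama, or vLLM API.

```python
# Minimal RAG sketch: embed documents, retrieve by cosine similarity,
# and build a grounded prompt. All names and data are illustrative.
from dataclasses import dataclass
import math


@dataclass
class Doc:
    text: str
    vector: list[float]


def embed(text: str, dim: int = 64) -> list[float]:
    # Toy hashed bag-of-words embedder so the example runs end to end;
    # a production system would call a real embedding model instead.
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def retrieve(query: str, store: list[Doc], k: int = 2) -> list[Doc]:
    # Rank stored documents by cosine similarity to the query embedding.
    q = embed(query)
    scored = sorted(store, key=lambda d: -sum(a * b for a, b in zip(q, d.vector)))
    return scored[:k]


def build_grounded_prompt(query: str, store: list[Doc]) -> str:
    # Concatenate the retrieved snippets into the prompt so the LLM answers
    # from up-to-date knowledge rather than from its parameters alone.
    context = "\n".join(f"- {d.text}" for d in retrieve(query, store))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )


if __name__ == "__main__":
    kb = [Doc(t, embed(t)) for t in [
        "Route 12 buses run every 10 minutes during peak hours.",
        "The marketing campaign launches in Q3.",
    ]]
    print(build_grounded_prompt("How often do route 12 buses run?", kb))
```

In a real agent, the retrieval step would query a vector store and the grounded prompt would be sent to a model served via vLLM or Ollama; the structure of the loop stays the same.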
Essential Skills and Experience:
- Proven, hands-on experience in the end-to-end development and deployment of AI agents and autonomous systems in production environments.
- Strong proficiency with AI orchestration frameworks such as LangChain and LangGraph, and/or experience with the Microsoft Bot Framework.
- Practical experience with LLM serving technologies such as Ollama or vLLM.
- Demonstrable experience implementing and optimizing Retrieval-Augmented Generation (RAG) systems for enhanced AI outputs.
- Understanding and practical application of Memory, Reasoning, and Planning (MRP) concepts in agentic AI development.
- Familiarity with and/or experience implementing Agent-to-Agent (A2A) communication protocols.
- Experience working with and processing diverse data modalities, including image processing, time-series analysis, natural language processing (NLP), and graph data structures.
- Extensive hands-on experience with at least one major cloud platform (AWS, Azure, or GCP) for deploying and managing AI solutions, including familiarity with relevant services (e.g., EC2, S3, Azure ML, GCP AI Platform).
- In-depth knowledge of containerization technologies (e.g., Docker) and substantial experience with container orchestration using Kubernetes.
- Solid understanding and practical application of MLOps practices and tools, with particular familiarity with orchestration tools like Argo (Argo Workflows, Argo CD).
- Proficiency in modern deep learning frameworks (e.g., TensorFlow, PyTorch) for model training, fine-tuning, and evaluation (a brief sketch follows this list).
- Strong programming skills in Python, coupled with experience in version control systems like Git.
- Excellent problem-solving abilities, a strong analytical mindset, and the capacity to thrive in a fast-paced, startup environment.
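To illustrate the fine-tuning workflow mentioned above, here is a brief, hypothetical PyTorch sketch: freeze a stand-in "pretrained" backbone, train a small task head on synthetic data, and run a quick evaluation. The backbone, data, and hyperparameters are placeholders and do not reflect any specific model or project.

```python
# Hypothetical fine-tuning sketch: tune only a task head on top of a frozen backbone.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in "pretrained" backbone and a fresh classification head.
backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
head = nn.Linear(64, 2)
for p in backbone.parameters():
    p.requires_grad = False  # freeze pretrained weights, train only the head

# Synthetic data purely to make the sketch runnable.
x, y = torch.randn(256, 32), torch.randint(0, 2, (256,))
loader = DataLoader(TensorDataset(x, y), batch_size=32, shuffle=True)

optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(head(backbone(xb)), yb)
        loss.backward()
        optimizer.step()

# Quick evaluation pass on the same synthetic data (illustration only).
with torch.no_grad():
    acc = (head(backbone(x)).argmax(dim=1) == y).float().mean()
print(f"toy accuracy: {acc.item():.2f}")
```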
Bonus Points:
- Experience with knowledge graphs or semantic web technologies for advanced knowledge representation.
- Understanding of prompt engineering and fine-tuning strategies for large language models.
- Familiarity with data governance, security, and privacy best practices in AI deployments.
- Contributions to open-source AI projects.
- Patents or publications in relevant fields.