Audience
AI researchers and ML engineers who want to experiment with fine-tuning open-source language models while outsourcing infrastructure complexity
About Tinker
Tinker is a training API for researchers and developers that gives full control over model fine-tuning while abstracting away infrastructure complexity. It exposes low-level training primitives from which users build custom training loops, supervision logic, and reinforcement learning flows. It currently supports LoRA fine-tuning on open-weight models from the Llama and Qwen families, ranging from small models to large mixture-of-experts architectures. Users write Python code to handle data, loss functions, and algorithmic logic; Tinker handles scheduling, resource allocation, distributed training, and failure recovery behind the scenes. Users can download model weights at any checkpoint and never have to manage the compute environment themselves. Tinker is delivered as a managed offering: training jobs run on Thinking Machines' internal GPU infrastructure, freeing users from cluster orchestration.
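The division of labor described above can be made concrete with a short sketch of a supervised fine-tuning loop. The primitive names (forward_backward, optim_step, save_state) match the training primitives Tinker publicly describes; the package import, client constructors, argument names, and the make_batches helper are illustrative assumptions rather than the verified API.

import tinker  # assumed package name

service_client = tinker.ServiceClient()  # assumed entry point
training_client = service_client.create_lora_training_client(
    base_model="Qwen/Qwen3-8B",  # any supported open-weight model
)

def make_batches(path):
    """Hypothetical helper: yield tokenized batches from a local dataset."""
    ...

for batch in make_batches("train.jsonl"):
    # Data handling and the choice of loss stay in user-side Python;
    # the gradient computation itself runs on Tinker's managed GPUs.
    training_client.forward_backward(batch, loss_fn="cross_entropy")
    # Apply one optimizer update to the LoRA weights.
    training_client.optim_step(tinker.AdamParams(learning_rate=1e-4))

# Save a checkpoint whose weights can later be downloaded.
training_client.save_state(name="final")

The key design point the sketch illustrates: every line of algorithmic logic lives in the user's script, while each primitive call dispatches to Thinking Machines' infrastructure, so scheduling, distribution, and failure recovery never surface in user code.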