Jan-v1-edge
Jan-v1-edge: an efficient 1.7B-parameter reasoning model optimized for edge devices
...The model was refined through a two-stage post-training process: Supervised Fine-Tuning (SFT) to transfer knowledge from Jan-v1, followed by Reinforcement Learning with Verifiable Rewards (RLVR) to optimize reasoning, tool use, and correctness. With just 1.7B parameters, Jan-v1-edge achieves 83% accuracy on the SimpleQA benchmark, approaching the performance of larger models such as Jan-nano-128k. Benchmark comparisons show it remains competitive with, or superior to, similarly sized Qwen models in areas such as EQBench and recency QA, with slight trade-offs in instruction following and creative writing.
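The RLVR stage trains against rewards that can be checked programmatically rather than by a learned reward model. As a rough illustration only (the actual verifiers and reward functions used for Jan-v1-edge are not specified here), an exact-match reward for short-answer QA might look like:

```python
import re


def verifiable_reward(model_output: str, gold_answer: str) -> float:
    """Binary verifiable reward: 1.0 if the model's final answer matches
    the reference answer, else 0.0.

    Illustrative assumptions: the answer is either wrapped in \\boxed{...}
    or given on the last line of the output. This is a hypothetical grader,
    not Jan's actual verification code.
    """
    match = re.search(r"\\boxed\{([^}]*)\}", model_output)
    if match:
        answer = match.group(1)
    else:
        # Fall back to the last non-empty line of the output.
        lines = [ln for ln in model_output.strip().splitlines() if ln.strip()]
        answer = lines[-1] if lines else ""
    return 1.0 if answer.strip().lower() == gold_answer.strip().lower() else 0.0
```

During RL training, a reward like this scores each sampled completion, and the policy is updated to increase the likelihood of completions that verify correctly.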