| Name | Modified | Size |
|---|---|---|
| cubefs-3.5.3-linux-amd64.tar.gz.asc | 2025-12-23 | 490 Bytes |
| cubefs-3.5.3-linux-amd64.tar.gz.sha256sum | 2025-12-23 | 98 Bytes |
| cubefs-3.5.3-linux-amd64.tar.gz | 2025-12-23 | 104.5 MB |
| README.md | 2025-12-22 | 3.5 kB |
| Release v3.5.3 - 2025_12_23 source code.tar.gz | 2025-12-22 | 52.3 MB |
| Release v3.5.3 - 2025_12_23 source code.zip | 2025-12-22 | 55.6 MB |
| Totals: 6 items | | 212.5 MB |
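The listing above ships a `.sha256sum` digest and a `.asc` GPG signature alongside the release tarball. A minimal sketch of the verification flow is below; it uses a stand-in file so the commands run anywhere, so substitute the real `cubefs-3.5.3-linux-amd64.tar.gz` artifacts after downloading them.

```shell
# Sketch of release verification. "cubefs-release.tar.gz" is a stand-in
# file, not the real artifact; replace it with the downloaded tarball.
printf 'release payload' > cubefs-release.tar.gz

# The .sha256sum file pins the tarball's SHA-256 digest.
sha256sum cubefs-release.tar.gz > cubefs-release.tar.gz.sha256sum

# Re-check the digest; prints "cubefs-release.tar.gz: OK" on a match.
sha256sum -c cubefs-release.tar.gz.sha256sum

# For the real release, also verify the detached GPG signature
# (requires importing the release signing key first):
# gpg --verify cubefs-3.5.3-linux-amd64.tar.gz.asc cubefs-3.5.3-linux-amd64.tar.gz
```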
UPGRADE NOTICE
If you are using a CubeFS version earlier than v3.5.0, please refer to the UPGRADE NOTICE in version v3.5.0 for detailed upgrade steps and upgrade to v3.5.0 first.
Upgrade nodes in this order: flashnode → master → datanode → metanode → objectnode → lcnode → cli → client.
Upgrade lcnode and flashnode only if you use them, and deploy flashgroupmanager when needed.
Clients should use versions later than 3.2.0. Older client versions must be upgraded promptly; otherwise, cluster stability may be at risk.
Main Features
High-throughput LLM/MLLM training (tolerable storage-compute disaggregation latency: 8 ms)
- client: Support asynchronous flush for the extent handler to improve write performance. Write speed exceeds 1.2 GB/s; on a high-spec H20 training node, a single client can achieve 10+ GB/s aggregate throughput with 10 concurrent large-file writes. (#3973, @bboyCH4 @leonrayang @Victor1319)
- client: Optimize the client read-ahead mechanism and memory footprint; single-file (10 GB) read speeds exceed 2 GB/s. (#3982, @bboyCH4 @Victor1319)
- client: Metadata cache acceleration for small-file prewarm. (#3995, @Victor1319)
Note: Refer to the latest community documentation for enabling and tuning.
Distributed cache can run as an independent service
- flashgroupmanager: Introduce the flashgroupmanager node and topology to support flashnode cluster management. (@bboyCH4)
- flashnode: Support block-level data read and write operations. (#3977, @clinx)
- flashnode: Support the Master issuing cache warmup tasks to FlashNode for directory warmup. (#3997, @clinx)
- tools: Add `rctest` (benchmark) and `rcconfig` (config) tools for the remote cache system. (#3981, @bboyCH4, @clinx)
- client: Provide an SDK for the FlashNode object storage data block upload/download service. (#3985, @bboyCH4, @clinx)
- client: Implement a NearRead strategy that prioritizes reading from the nearest replica to reduce latency. (#3976, @zhumingze1108)
Enhancements
- client: The FUSE library supports parallel processing of FUSE requests to improve concurrency. (#3974, @Victor1319)
- client: Optimize metadata cache performance. (#3974, @Victor1319)
- client: Add a `tcpAliveTime` parameter for better TCP connection management. (#3974, @Victor1319)
- master: Support querying the evolution history of DP decommission status. (#3987, @shuqiang-zheng)
- master: Add `TryDecommissionRunningDiskIgnoreDps` to support differentiated disk-decommission strategies based on different reasons. (#3975, @shuqiang-zheng)
- master: Add audit logs for `migrateMetaPartition` and record reasons for DP migration/rollback. (#3975, @shuqiang-zheng)
- datanode: Support reason passthrough for DP migration. (#3975, @shuqiang-zheng)
- flashnode: Optimize cache operation opcodes and processing logic. (#3988, @clinx, @bboyCH4)
Bugfix
- master: Fix the decommission token being consumed twice on restart during two-replica DP decommissioning. (#3978, @shuqiang-zheng)
- client: Fix `ltp iogen01` test failure when read-ahead is enabled. (#3980, @clinx)
- client: Fix an offset calculation error during client read-ahead with partial hits. (#3979, @bboyCH4)
- master: Fix some DPs remaining in the decommission queue when disk offline marking fails, which affected subsequent decommissions. (#3983, @shuqiang-zheng)
- master: Fix incorrect disk/node decommission progress display for two-replica DPs after a leader change. (#3984, @Victor1319)