vLLM Development Activity Report - 2025-12-31
Time window: 2025-12-31 11:02 (UTC+8) ~ 2026-01-01 11:02 (UTC+8). Stats: 12 new issues | 25 issues closed | 30 new PRs | 12 PRs merged | 8 PRs closed unmerged
📊 Daily Development Summary
Over the New Year window (December 31, 2025 to January 1, 2026), the vLLM project kept up an active development pace, merging 12 PRs and closing 25 issues. Work focused on fixing fallout from recent major changes (the V1 engine and async scheduling enabled by default), including CI test failures, hardware compatibility (notably the Blackwell architecture), and memory leaks. In parallel, discussion and development continued on multimodal model support (audio processing, new model integrations) and low-level architecture work (MoE layer refactoring, KV cache).
🎯 AMD/ROCm Ecosystem Updates
This cycle saw several AMD/ROCm-related updates, mostly test fixes and code-robustness improvements. Nothing related to the Quark quantization toolkit or MI300-specific optimizations was observed.
- PR #31597 ([ROCm][CI] Fix language generation test accuracy by disabling HF flash_sdp and mem_efficient_sdp):
  - Author: AndreasKaratzas
  - Content: Fixes an accuracy problem in language generation tests on ROCm. The root cause is numerical-precision issues in the HuggingFace Transformers flash_sdp and mem_efficient_sdp attention implementations on ROCm. The fix disables both backends in the ROCm test configuration, forcing the more stable math_sdp backend as the baseline reference.
  - Impact: Ensures the accuracy and reliability of model-output comparison tests on ROCm, an important fix for cross-platform test consistency.
- PR #31590 ([Bugfix] Replace BaseException with specific exceptions in FLA utils) & PR #31587 ([Bugfix][Hardware][ROCm] Narrow broad exception in PyNCCL library loading):
  - Author: c0de128 (not an AMD employee, but the submissions carry the ROCm hardware label)
  - Content: These two PRs belong to a code-quality series that replaces overly broad exception handlers such as except Exception: or except BaseException: with specific exception types (e.g. RuntimeError, OSError).
  - Impact: No new functionality, but a better debugging experience on ROCm (and all platforms) when low-level libraries (NCCL/RCCL, FLA kernels) fail to load or error at runtime, since critical exceptions are no longer silently swallowed.
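The narrow-exception pattern these PRs apply can be sketched in a few lines. This is a hypothetical helper, not vLLM's actual loader code; the library name is deliberately fake so the failure path runs:

```python
import ctypes

def load_native_lib(path):
    """Try to load a shared library, failing loudly but precisely.

    ctypes.CDLL raises OSError when a library is missing or unloadable,
    so OSError is the exception to catch. Catching BaseException instead
    would also swallow KeyboardInterrupt and SystemExit, hiding both the
    real error and shutdown signals.
    """
    try:
        return ctypes.CDLL(path)
    except OSError as exc:
        # Narrow handler: we know exactly what failed and can say so.
        print(f"could not load {path}: {exc}")
        return None

handle = load_native_lib("libdefinitely-not-a-real-lib.so")  # returns None
```

The payoff is diagnostic precision: a RuntimeError from inside the library no longer masquerades as a "load failure".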
- PR #31551 ([ROCm][CI] Update MiniCPM model test…) (merged):
  - Author: AndreasKaratzas
  - Content: Updates the test model in ROCm CI, replacing MiniCPM3-4B with MiniCPM4.1-8B, and fixes a comparison problem in the test caused by the model's internal embedding-scaling logic.
  - Impact: Keeps the ROCm CI test set current and effective, while correcting a latent defect in the test logic.
Summary: AMD-ecosystem activity this cycle centered on test stability and code robustness, resolving ROCm-specific test failures and improving error handling to lay a more solid foundation for future feature work.
💬 High-Traffic Discussions
- Issue #31579: VLLM_FLOAT32_MATMUL_PRECISION=tf32 does not set cublas tf32 matmul:
  - Core issue: The VLLM_FLOAT32_MATMUL_PRECISION environment variable fails to enable TF32 matmul precision under PyTorch 2.9.1.
  - Discussion:
    - Reporter (cjackal): Identified the problem and traced it to PR #30428, which removed the deprecated torch.set_float32_matmul_precision('high') call. He questioned why setting torch.backends.cuda.matmul.fp32_precision = "tf32" was insufficient while TORCH_ALLOW_TF32_CUBLAS_OVERRIDE=1 worked.
    - Maintainer (yewentao256): Confirmed the relevant warning existed in PyTorch 2.9.0 but was removed in 2.9.1, and proposed reverting the earlier fix (PR #30428) to restore the original logic.
  - Point of contention: None of substance; the thread converged on a fix direction once the PyTorch API change was confirmed.
  - Status: Open; the maintainer has submitted a revert PR (#31585).
- Issue #22383: TypeError: FlashAttentionImpl.__init__() got an unexpected keyword argument 'sinks' (closed):
  - Core issue: With FP8 KV cache and related features enabled, the FlashAttention backend failed to initialize because it received an unexpected sinks argument.
  - Discussion:
    - Users: Reported the same error across many configurations (GPT-OSS models, V1/V0 engines, different CUDA versions, tracing enabled).
    - Workarounds: The community shared several, including patching the source, switching attention backends (FlashInfer), or setting VLLM_USE_V1=1. One user noted PR #22320 may have fixed it, yet the error still appeared under specific conditions.
  - Point of contention: None; the thread was about finding and sharing workarounds. The issue highlights the compatibility challenge of new features (such as sinks) against existing backends.
  - Outcome: Auto-closed by the bot after 90 days of inactivity, though the underlying compatibility problem may still be addressed elsewhere.
- Issue #31559: [CI Failure]: Failed to upload and process pipeline: Pipeline upload rejected (closed):
  - Core issue: The CI pipeline upload was rejected because of a duplicated key in the configuration.
  - Discussion:
    - Reporter (BlankRH): Reported the error and initially attributed it to an AMD CI PR, which turned out to be wrong.
    - Maintainer (tjtanaa): Quickly pinned the real culprit on another commit (ab1af6aa) that defined the same block key twice in the YAML, and immediately set about fixing it.
  - Point of contention: None; an efficient piece of collaborative troubleshooting.
  - Outcome: Resolved roughly 40 minutes after creation via the merge of PR #31562, a good example of rapid CI-maintenance response.
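A duplicated YAML key is easy to miss locally because most YAML loaders silently keep only the last occurrence. A minimal sketch of a duplicate-key check (illustrative only, not Buildkite's or vLLM's actual validation):

```python
def find_duplicate_keys(lines):
    """Flag duplicate top-level `key:` entries in a YAML-like snippet.

    Many YAML loaders keep only the last duplicate without warning, so a
    repeated block key can go unnoticed until the CI service rejects the
    pipeline upload. Only unindented, non-comment lines are considered.
    """
    seen, dupes = set(), []
    for line in lines:
        if line and ":" in line and line[0] not in " #":
            key = line.split(":", 1)[0].strip()
            if key in seen:
                dupes.append(key)
            seen.add(key)
    return dupes

# A duplicated `label` key, analogous to the broken pipeline config:
print(find_duplicate_keys(["label: test-a", "  command: pytest", "label: test-a"]))
```

Running a check like this in pre-commit would catch the problem before the upload is rejected.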
🔥 Hot Topics and Trends
- Growing pains of the V1 engine and async scheduling: Several new and closed issues trace back to async scheduling being enabled by default (PR #27614), causing CI test failures (#31570) and crashes on new hardware platforms (#31588). The new architecture is being stress-tested by wide deployment.
- Multimodal support keeps deepening: New PRs cover an audio channel normalization framework (#31595), support for the new models GLM-ASR (#31436, merged) and IQuestCoder (#31575), and LoRA support for DeepSeek-OCR (#31569), showing multimodality remains a major expansion front.
- GPU hardware compatibility challenges: Issues frequently feature the latest NVIDIA hardware (B200, B300, Blackwell GB10), spanning quantization (#31594), FP8 support, and driver/PyTorch version compatibility, underscoring the continuous adaptation pressure of tracking cutting-edge hardware.
- MoE architecture evolution and optimization: An RFC on refactoring the MoELayer (#31578), plus PRs fixing MoE-related shape calculations (#31596) and quantization configuration (#31593), show the team untangling the complex MoE implementation in preparation for future performance and feature work.
- CI/CD and test robustness: Beyond the CI-failure fixes above, there were fixes for token-counting assumptions in tests (#31565) and ROCm test accuracy (#31597), reflecting strong attention to test quality and cross-platform consistency.
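On the audio side, the "mono dimension" normalization mentioned above boils down to collapsing multi-channel input to one channel. A generic sketch of the idea (not the actual implementation in PR #31595):

```python
def to_mono(samples):
    """Collapse multi-channel audio frames to mono by channel averaging.

    `samples` is a list of per-frame channel tuples, e.g. stereo
    [(l0, r0), (l1, r1), ...]; already-mono input (plain floats) is
    returned unchanged. Downstream ASR models typically expect a single
    1-D waveform, so normalizing here avoids shape surprises later.
    """
    if samples and isinstance(samples[0], (tuple, list)):
        return [sum(frame) / len(frame) for frame in samples]
    return list(samples)
```

Averaging is the conventional stereo-to-mono downmix; other policies (taking one channel, weighted mixes) are possible but lossier or format-specific.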
🛠️ Key Technical Changes
- PR #31584 ([BugFix] Fix async scheduling for pooling models) (merged):
  - Technical notes: Fixes a race condition affecting pooling models under async scheduling, and moves the CPU copy of output data onto a dedicated stream to unlock async scheduling's performance gains.
  - Impact: Directly resolves the CI failure in Issue #31570, hardens the V1 engine when running classification/ranking models, and paves the way for better async-mode performance.
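The "dedicated stream for output copies" idea can be sketched with a plain worker thread (only an analogy; vLLM uses CUDA streams, not Python threads): the scheduler enqueues copy jobs and continues immediately instead of blocking on each device-to-CPU transfer.

```python
import queue
import threading

copy_jobs = queue.Queue()
results = []

def copy_worker():
    # Drains copy jobs so the scheduling loop never blocks on them.
    while True:
        item = copy_jobs.get()
        if item is None:  # shutdown sentinel
            break
        results.append(list(item))  # stand-in for a device-to-CPU copy
        copy_jobs.task_done()

worker = threading.Thread(target=copy_worker, daemon=True)
worker.start()
for batch in ([1, 2], [3, 4]):
    copy_jobs.put(batch)  # non-blocking from the scheduler's point of view
copy_jobs.join()          # wait until all copies have landed
copy_jobs.put(None)
worker.join()
```

The scheduler-side `put` returns immediately, which is the whole point: overlap the copy with the next scheduling step rather than serializing them.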
- PR #31596 ([MoE] Fix output_shape calculation in Attention layer to handle 3D query inputs):
  - Technical notes: Fixes a regression introduced by PR #28775. The code computing output_shape assumed the query input is always 2D [num_tokens, hidden_dim], but certain layers in models such as DeepSeek-V2/V3 pass a 3D [num_tokens, num_heads, head_dim] tensor, causing downstream shape mismatches and FP8 quantization kernel failures.
  - Impact: Fixes runtime errors affecting DeepSeek-family models (with MLA and MTP layers) in specific configurations and makes the Attention layer robust to input shape.
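The shape logic at stake can be illustrated with a tiny helper (hypothetical, not vLLM's actual Attention API): deriving the output shape from only the token dimension keeps both query layouts working, whereas the regressed code effectively baked in the 2D layout.

```python
def attention_output_shape(query_shape, hidden_dim):
    """Output shape for an attention layer given a 2D or 3D query shape.

    2D queries arrive as (num_tokens, hidden_dim); some DeepSeek-V2/V3
    layers pass 3D (num_tokens, num_heads, head_dim). Either way the
    attention output flattens heads back into the hidden dimension, so
    only the leading token count matters.
    """
    num_tokens = query_shape[0]
    return (num_tokens, hidden_dim)

# Both layouts map to the same (num_tokens, hidden_dim) output:
assert attention_output_shape((128, 4096), 4096) == (128, 4096)
assert attention_output_shape((128, 32, 128), 4096) == (128, 4096)
```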
- PR #31585 ([Bug] Revert torch warning fix):
  - Technical notes: Plans to revert the change made to silence a PyTorch deprecation warning (PR #30428). PyTorch 2.9.1 has removed the warning, and the earlier "fix" left TF32 precision unable to be enabled correctly (Issue #31579).
  - Impact: Restores correct TF32 matmul behavior, preserving the expected performance/precision balance on TF32-capable hardware (Ampere, Hopper).
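The wiring the revert restores amounts to mapping an environment variable onto a PyTorch precision setting. A hypothetical sketch of that mapping (not vLLM's actual code; in real use the returned string would feed torch.set_float32_matmul_precision, whose "high" mode enables TF32 matmuls):

```python
import os

def resolve_fp32_precision(default="highest"):
    """Map a VLLM_FLOAT32_MATMUL_PRECISION-style env var to a PyTorch
    float32 matmul precision string.

    "tf32" is treated as an alias for "high", since "high" is the
    torch.set_float32_matmul_precision mode that turns on TF32.
    """
    value = os.environ.get("VLLM_FLOAT32_MATMUL_PRECISION", default).lower()
    if value not in {"highest", "high", "tf32"}:
        raise ValueError(f"unsupported fp32 matmul precision: {value!r}")
    return "high" if value == "tf32" else value
```

Issue #31579 arose precisely because this last hop into the PyTorch API was removed, leaving the env var with no effect.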
📈 Development Activity Observations
- Efficient merging and issue triage: 12 PRs merged and 25 issues closed within 24 hours (15 of them stale-issue auto-cleanup) shows the team maintaining the issue queue while pushing features forward.
- Diverse contributors: Active contributors include AMD-ecosystem developers (AndreasKaratzas), Red Hat engineers (robertgshaw2-redhat, c0de128), and developers from Intel, Amazon, and Meta, evidence of vLLM's broad industry participation.
- Fast code review and CI response: A CI-blocking problem (#31559) was located, fixed, and merged in under an hour, reflecting a mature operations process.
💡 Issues Worth Watching
- Issue #31588: vLLM SM 12.1 (Blackwell GB10) V1 Engine Bug Report: On the latest Blackwell GB10 GPU, the V1 engine crashes due to async scheduling and a missing None check. This affects all SM 12.1 users and is a key blocker for new-hardware bring-up.
- Issue #31577: Memory leak in serving Whisper: Reports a memory leak when serving the Whisper speech model, already confirmed by other users. Memory leaks are critical for production stability and deserve priority investigation.
- Issue #31578: [Feature]: New MoELayer: A design discussion (RFC) on refactoring the complex MoE layer, aiming to consolidate the optimization logic currently scattered through FusedMoE under a unified MoELayer abstraction. The refactor could have far-reaching effects on MoE performance, maintainability, and extensibility; its design progress is worth following.
📋 Appendix: Detailed Data
New Issues
- #31594 [Bug]: MiniMax-M2.1 DP8EP Recipe Error on B200 — bug — by kimbochen (created: 2026-01-01 09:01 (UTC+8))
- #31570 [CI Failure]: Pooling models (Classification model) + tp failure in Full CI run — ci-failure — by noooop (created: 2025-12-31 16:15 (UTC+8))
- #31588 [Bug]: vLLM SM 12.1 (Blackwell GB10) V1 Engine Bug Report (Relates to: #28589, #31128, #28621, #27679) — bug — by ohsono (created: 2026-01-01 03:41 (UTC+8))
- #31579 [Bug]: VLLM_FLOAT32_MATMUL_PRECISION=tf32 does not set cublas tf32 matmul — bug — by cjackal (created: 2025-12-31 21:27 (UTC+8))
- #31582 [Bug]: Cannot run model — bug — by frenzybiscuit (created: 2026-01-01 00:02 (UTC+8))
- #31577 [Bug]: Memory leak in serving Whisper — bug — by parssky (created: 2025-12-31 20:28 (UTC+8))
- #31578 [Feature]: New MoELayer — feature request — by robertgshaw2-redhat (created: 2025-12-31 20:40 (UTC+8))
- #31574 [Usage]: If vllm surpport load LoRA adapter and DeepSeek-v3.1-termunis at the same time — usage — by AIR-hl (created: 2025-12-31 18:33 (UTC+8))
- #31567 [RFC]: Why custom_mask is not exposed on FlashInfer to get more flexible use case? — RFC — by npuichigo (created: 2025-12-31 14:00 (UTC+8))
- #31564 [Bug]: Qwen3-VL-8B-Instruct has accuracy issue - Multi modal accuracy issue — bug — by Dineshkumar-Anandan-ZS0367 (created: 2025-12-31 13:13 (UTC+8))
- #31559 [CI Failure]: Failed to upload and process pipeline: Pipeline upload rejected — ci-failure — by BlankRH (created: 2025-12-31 12:22 (UTC+8))
- #31557 [Bug]: DeepSeek on B300 reports invalid numeric default value error — bug — by kebe7jun (created: 2025-12-31 11:34 (UTC+8))
Closed Issues
- #18887 [Bug]: FlashMLA V1 with FP8 KV cache not yet supported! — bug,stale — by tensorflowt (closed: 2026-01-01 10:18 (UTC+8))
- #19007 [Feature]: Individual GuidedDecodingParams for each prompt in prompts. — feature request,stale — by NilsHellwig (closed: 2026-01-01 10:18 (UTC+8))
- #19489 [Bug]: Phi-4-mini-instruct / Phi-4-multimodal-instruct produces gibberish when input <4096 tokens and output is >4096 tokens — bug,stale — by Roman-Malinowski (closed: 2026-01-01 10:18 (UTC+8))
- #19846 [Bug]: When "tool_choice": "auto" is set, there is a reasoning_content process in the output, but this process is missing when "tool_choice": "required" is used. — bug,stale — by ericperfect (closed: 2026-01-01 10:18 (UTC+8))
- #22383 [Bug]: TypeError: FlashAttentionImpl.__init__() got an unexpected keyword argument 'sinks' — bug,stale — by shashankgaur3 (closed: 2026-01-01 10:17 (UTC+8))
- #23301 [Usage]: During testing of the LoRA model, the "enable-prefix-caching" feature did not take effect — usage,stale — by xiangfei01 (closed: 2026-01-01 10:17 (UTC+8))
- #23344 [Feature][Wide EP]: Add NIXL, DeepEP, DeepGEMM, and PPLX to Docker Image — feature request,stale — by robertgshaw2-redhat (closed: 2026-01-01 10:17 (UTC+8))
- #23784 [Usage]: minicpm-4.5v — usage,stale — by hyyuananran (closed: 2026-01-01 10:17 (UTC+8))
- #23860 [Bug]: Setting up vLLM with a multi-host for example v6e-4x4 TPU topology fails — usage,stale — by SinaChavoshi (closed: 2026-01-01 10:17 (UTC+8))
- #23922 [Bug]: Unrecognized FP8 dtype: fp8_e5m2 — bug,stale — by JackLeeHal (closed: 2026-01-01 10:17 (UTC+8))
- #23977 [Feature]: Benchmark for the Sampler — good first issue,feature request,stale — by houseroad (closed: 2026-01-01 10:16 (UTC+8))
- #23979 [Bug]: new version critical bug with 100% gpu util but get stuck — bug,stale — by yanan1116 (closed: 2026-01-01 10:16 (UTC+8))
- #24082 [Bug]: v1.10.x is slower than 0.8.5.post1 when running qwen3 — bug,stale — by qiulang (closed: 2026-01-01 10:16 (UTC+8))
- #24090 [New Model]: OpenCUA — stale — by shzirui (closed: 2026-01-01 10:16 (UTC+8))
- #24095 [Usage]: What is the benchmark configuration? — usage,stale — by ceci3 (closed: 2026-01-01 10:16 (UTC+8))
- #24109 [Bug]: DeepSeek fails with enabled VLLM_USE_FLASHINFER_MOE_FP8=1 — bug,stale — by alexm-redhat (closed: 2026-01-01 10:16 (UTC+8))
- #24117 [Feature]: Optimize DP/EP Low Batch Size Decode DeepSeek-R1 — feature request,stale — by robertgshaw2-redhat (closed: 2026-01-01 10:16 (UTC+8))
- #24133 [Bug]: vLLM stuck when serving GLM-4.5 model — bug,stale — by sarckk (closed: 2026-01-01 10:16 (UTC+8))
- #31570 [CI Failure]: Pooling models (Classification model) + tp failure in Full CI run — ci-failure — by noooop (closed: 2026-01-01 06:48 (UTC+8))
- #31582 [Bug]: Cannot run model — bug — by frenzybiscuit (closed: 2026-01-01 00:41 (UTC+8))
- #23578 [Installation]: — installation,stale — by quencs (closed: 2025-12-31 23:27 (UTC+8))
- #31529 [Bug]: vllm 0.12.0 fail to start Qwen3-VL-30B-A3B-Thinking — bug — by Arashi19901001 (closed: 2025-12-31 15:56 (UTC+8))
- #31555 [Docs] Feedback for /en/stable/MONSTERDOG — documentation — by s33765387-cpu (closed: 2025-12-31 13:18 (UTC+8))
- #31559 [CI Failure]: Failed to upload and process pipeline: Pipeline upload rejected — ci-failure — by BlankRH (closed: 2025-12-31 13:04 (UTC+8))
- #31518 [Bug]: NUMA node validation incorrectly compares against CPU IDs instead of NUMA node IDs — bug — by SameerAsal (closed: 2025-12-31 12:06 (UTC+8))
New PRs
- #31597 [ROCm][CI] Fix language generation test accuracy by disabling HF flash_sdp and mem_efficient_sdp — rocm — by AndreasKaratzas (created: 2026-01-01 10:55 (UTC+8))
- #31575 [Model] Support IQuestCoder model — new-model — by yxing-bj (created: 2025-12-31 19:08 (UTC+8))
- #31596 [MoE] Fix output_shape calculation in Attention layer to handle 3D query inputs — no labels — by AndreasKaratzas (created: 2026-01-01 10:24 (UTC+8))
- #31595 Fix audio mono dimension — documentation,multi-modality,qwen — by jeremyteboul (created: 2026-01-01 09:53 (UTC+8))
- #31593 Fix flashinfer experts quant config hack — llama,nvidia — by robertgshaw2-redhat (created: 2026-01-01 08:43 (UTC+8))
- #31585 [Bug] Revert torch warning fix — bug,ready,v1 — by yewentao256 (created: 2026-01-01 03:14 (UTC+8))
- #31592 feat(kv-cache): support multiple sliding window groups in HybridKVCac… — v1 — by DZADSL72-00558 (created: 2026-01-01 08:26 (UTC+8))
- #31576 feat: add vllm.utils.device_utils module — no labels — by codebasecomprehension (created: 2025-12-31 19:18 (UTC+8))
- #31569 feat: support LoRA for DeepSeek-OCR(Language Model part) — documentation,ready,deepseek — by zhima771 (created: 2025-12-31 15:54 (UTC+8))
- #31565 [CI][Bugfix] Fix token counting in chunked prefill streaming test — ready — by AndreasKaratzas (created: 2025-12-31 13:41 (UTC+8))
- #31584 [BugFix] Fix async scheduling for pooling models — bug,ready,v1 — by njhill (created: 2026-01-01 02:56 (UTC+8))
- #31590 [Bugfix] Replace BaseException with specific exceptions in FLA utils — ready — by c0de128 (created: 2026-01-01 04:04 (UTC+8))
- #31568 [Core] Optimize group size selection for hybrid KV cache — v1 — by DZADSL72-00558 (created: 2025-12-31 14:59 (UTC+8))
- #31580 [Bugfix]: update global_rank when adjusting rpc_rank to fix layer key error — v1 — by zhaoninge (created: 2025-12-31 22:32 (UTC+8))
- #31591 [Misc] Tidy up some spec decode logic in GPUModelRunner — ready,v1 — by njhill (created: 2026-01-01 04:22 (UTC+8))
- #31589 [Bugfix] Narrow broad exceptions in rank detection functions — no labels — by c0de128 (created: 2026-01-01 04:04 (UTC+8))
- #31587 [Bugfix][Hardware][ROCm] Narrow broad exception in PyNCCL library loading — rocm — by c0de128 (created: 2026-01-01 03:37 (UTC+8))
- #31586 [Bugfix] Narrow broad exception in custom all-reduce detection — no labels — by c0de128 (created: 2026-01-01 03:37 (UTC+8))
- #31583 [BugFix] scheduler: Fix resuming of preempted requests after async load — v1 — by orozery (created: 2026-01-01 02:12 (UTC+8))
- #31581 [Frontend] [Bugfix] respect server-level default chat template kwargs in reasoning parser — frontend — by cjackal (created: 2025-12-31 23:57 (UTC+8))
- #31572 [Bugfix] Fix activation quantization for compressed-tensors W4A16 — ready — by Tmn07 (created: 2025-12-31 17:41 (UTC+8))
- #31573 [P/D] Refactor mooncake connector sender thread using async coroutines — kv-connector — by dtcccc (created: 2025-12-31 17:54 (UTC+8))
- #31571 [Quantization][MoE] remove unused ep logic from moe marlin — no labels — by jinzhen-lin (created: 2025-12-31 17:18 (UTC+8))
- #31556 feat: enhance environment variable handling with improved security fi… — no labels — by leejianwoo-collab (created: 2025-12-31 11:23 (UTC+8))
- #31563 [Model] Support SentenceTransformers V6 reranker config — documentation,frontend — by noooop (created: 2025-12-31 13:04 (UTC+8))
- #31561 [Feat][PP] support async send for PP — v1 — by pisceskkk (created: 2025-12-31 12:58 (UTC+8))
- #31566 [Bugfix] Adjust default parameters in test_completion — v1 — by 1643661061leo (created: 2025-12-31 13:57 (UTC+8))
- #31558 [Bugfix] fix routing_method_type default value — nvidia — by kebe7jun (created: 2025-12-31 11:36 (UTC+8))
- #31560 [Bugfix] Modify some parameters to ensure the completion test passes — ci/build,v1 — by 1643661061leo (created: 2025-12-31 12:45 (UTC+8))
- #31562 [CI] [Critical] [CUDA] Fix duplicated test name — ready,ci/build,nvidia — by tjtanaa (created: 2025-12-31 12:59 (UTC+8))
Merged PRs
- #29279 [Audio] Improve Audio Inference Scripts (offline/online) — documentation,ready — by ekagra-ranjan (merged: 2026-01-01 07:34 (UTC+8))
- #31565 [CI][Bugfix] Fix token counting in chunked prefill streaming test — ready — by AndreasKaratzas (merged: 2026-01-01 07:05 (UTC+8))
- #31584 [BugFix] Fix async scheduling for pooling models — bug,ready,v1 — by njhill (merged: 2026-01-01 06:48 (UTC+8))
- #31546 [Bugfix] Fix BAGEL online serving for text and image understanding — ready — by Dylan1229 (merged: 2026-01-01 06:46 (UTC+8))
- #31436 Add GLM-ASR multimodal support — documentation,new-model,ready,multi-modality — by baonudesifeizhai (merged: 2025-12-31 23:12 (UTC+8))
- #31551 [ROCm][CI] Update MiniCPM model test: MiniCPM3-4B to MiniCPM4.1-8B and simplify attention backend testing — rocm,ready — by AndreasKaratzas (merged: 2025-12-31 16:12 (UTC+8))
- #31003 [Mics] add pcp basic support to MoE model — ready — by pisceskkk (merged: 2025-12-31 12:01 (UTC+8))
- #31562 [CI] [Critical] [CUDA] Fix duplicated test name — ready,ci/build,nvidia — by tjtanaa (merged: 2025-12-31 13:01 (UTC+8))
- #31390 [Bug] Fix log issue with \n — ready,v1 — by yewentao256 (merged: 2025-12-31 13:16 (UTC+8))
- #31539 Add get_expert_mapping to NemotronHModel (for LoRA support) — ready — by danisereb (merged: 2025-12-31 13:09 (UTC+8))
- #31517 [Core] Remove unused num_tokens parameter from _init_model_kwargs — ready,v1 — by maang-h (merged: 2025-12-31 12:47 (UTC+8))
- #31520 [BugFix] Fix NUMA node validation in CPU platform — ready,cpu — by SameerAsal (merged: 2025-12-31 12:06 (UTC+8))
Closed Unmerged PRs
- #21365 fix: return {} for tool arguments when no argument is needed, so that… — frontend,stale,tool-calling — by web3-luoxi (closed: 2026-01-01 10:18 (UTC+8))
- #22706 [Feat] Support elastic KV cache memory pool for dynamic GPU memory sharing — needs-rebase,stale,v1 — by ivanium (closed: 2026-01-01 10:17 (UTC+8))
- #24030 Fix typo in test_attention_backends.py — stale,v1 — by initzhang (closed: 2026-01-01 10:16 (UTC+8))
- #24084 [Bugfix] Fix AssertionError in cache_full_blocks due to dirty blocks — stale,v1 — by nnding (closed: 2026-01-01 10:16 (UTC+8))
- #24099 The downloaded tags directory is missing a .git folder, which is ca… — ci/build,stale — by duhanmin (closed: 2026-01-01 10:16 (UTC+8))
- #24103 [Bugfix] sliding_window AttributeError — stale — by hongkunyoo (closed: 2026-01-01 10:16 (UTC+8))
- #31566 [Bugfix] Adjust default parameters in test_completion — v1 — by 1643661061leo (closed: 2025-12-31 13:58 (UTC+8))
- #31560 [Bugfix] Modify some parameters to ensure the completion test passes — ci/build,v1 — by 1643661061leo (closed: 2025-12-31 13:17 (UTC+8))