| # | Title | Summary | Keywords | ✅ |
|---|-------|---------|----------|----|
| 1 | Large Language Models Can Verbatim Reproduce Long Malicious Sequences | Shows that LLMs are vulnerable to backdoor attacks that make them verbatim reproduce long malicious sequences. | large language model | |
| 2 | Large Language Model Compression via the Nested Activation-Aware Decomposition | Proposes nested activation-aware decomposition (NSVD) for efficient compression of large language models. | large language model | |
| 3 | Enhanced Smart Contract Reputability Analysis using Multimodal Data Fusion on Ethereum | Proposes a multimodal data-fusion approach to smart contract reputability analysis, strengthening trust in the Ethereum ecosystem. | multimodal | |
| 4 | Lie Detector: Unified Backdoor Detection via Cross-Examination Framework | Proposes a unified cross-examination framework for backdoor detection to address security risks. | large language model, multimodal | |
| 5 | Fairness-Driven LLM-based Causal Discovery with Active Learning and Dynamic Scoring | Proposes an LLM-based causal discovery framework with active learning and dynamic scoring, improving the efficiency of fairness analysis. | large language model | |
| 6 | Improving Quantization with Post-Training Model Expansion | Proposes post-training model expansion, improving quantized LLM quality while reducing model size. | large language model | |
| 7 | Variance Control via Weight Rescaling in LLM Pre-training | Proposes LIR initialization and TVR variance control, improving LLM pre-training performance and easing quantization. | large language model | ✅ |
| 8 | LEMMA: Learning from Errors for MatheMatical Advancement in LLMs | LEMMA: improves the mathematical reasoning ability of LLMs by learning from errors. | large language model | |
| 9 | FactSelfCheck: Fact-Level Black-Box Hallucination Detection for LLMs | FactSelfCheck: a fact-level black-box hallucination detection method for LLMs. | large language model | |
| 10 | TreeSynth: Synthesizing Diverse Data from Scratch via Tree-Guided Subspace Partitioning | TreeSynth: synthesizes diverse data from scratch via tree-guided subspace partitioning. | large language model | ✅ |
| 11 | Deterministic AI Agent Personality Expression through Standard Psychological Diagnostics | Achieves deterministic AI agent personality expression via standard psychological diagnostics. | large language model | |
| 12 | TRACE: Time SeRies PArameter EffiCient FinE-tuning | TRACE: a parameter-efficient fine-tuning method for time-series foundation models. | foundation model | |
| 13 | V-Seek: Accelerating LLM Reasoning on Open-hardware Server-class RISC-V Platforms | V-Seek: accelerates LLM inference on open-hardware server-class RISC-V platforms. | large language model | |
| 14 | Improving the End-to-End Efficiency of Offline Inference for Multi-LLM Applications Based on Sampling and Simulation | Proposes the SamuLLM framework, improving offline-inference efficiency for multi-LLM applications via sampling and simulation. | large language model | |
| 15 | Understanding Bias Reinforcement in LLM Agents Debate | Proposes the DReaMAD framework, mitigating bias reinforcement in LLM-agent debate via diverse reasoning and prompt optimization. | large language model | |
| 16 | Enabling Global, Human-Centered Explanations for LLMs: From Tokens to Interpretable Code and Test Generation | Proposes the CodeQ framework for global, human-centered interpretability analysis of code LLMs. | large language model | |