cs.AI (2024-07-23)

📊 10 papers in total | 🔗 3 with code

🎯 Interest Area Navigation

Pillar 9: Embodied Foundation Models (9, 🔗 3) · Pillar 1: Robot Control (1)

🔬 Pillar 9: Embodied Foundation Models (9 papers)

| # | Title | One-line Summary | Tags | 🔗 |
|---|-------|------------------|------|----|
| 1 | UniMEL: A Unified Framework for Multimodal Entity Linking with Large Language Models | UniMEL: a unified framework for multimodal entity linking built on large language models | large language model, multimodal | |
| 2 | OpenHands: An Open Platform for AI Software Developers as Generalist Agents | OpenHands: an open platform for AI software developers acting as generalist agents | generalist agent, large language model | |
| 3 | RedAgent: Red Teaming Large Language Models with Context-aware Autonomous Language Agent | Proposes a context-aware autonomous language agent for red-teaming LLMs | large language model | |
| 4 | Prompt Injection Attacks on Large Language Models in Oncology | Exposes security vulnerabilities of medical vision-language models to prompt injection attacks | large language model | |
| 5 | Artificial Agency and Large Language Models | Proposes a threshold conceptual model of artificial agency based on a dynamical framework, and examines the feasibility of LLMs achieving artificial agency | large language model | |
| 6 | HAPFI: History-Aware Planning based on Fused Information | Proposes HAPFI: a multimodal embodied instruction-following planning method that fuses historical information | instruction following | |
| 7 | Rome was Not Built in a Single Step: Hierarchical Prompting for LLM-based Chip Design | Proposes a hierarchical prompting method that improves LLM HDL code generation for complex chip design | large language model | |
| 8 | Patched RTC: evaluating LLMs for diverse software development tasks | Proposes Patched RTC to evaluate the consistency and robustness of LLMs on software development tasks | large language model | |
| 9 | PrimeGuard: Safe and Helpful LLMs through Tuning-Free Routing | PrimeGuard: safe and helpful LLMs via tuning-free routing | instruction following | |

🔬 Pillar 1: Robot Control (1 paper)

| # | Title | One-line Summary | Tags | 🔗 |
|---|-------|------------------|------|----|
| 10 | LLMs can be Dangerous Reasoners: Analyzing-based Jailbreak Attack on Large Language Models | Proposes the Analyzing-based Jailbreak (ABJ) attack, which exploits vulnerabilities in the LLM reasoning process to bypass safety mechanisms | manipulation, large language model, multimodal | |
