Episodes

  • Marco-o1
    2024/11/23
    🤖 Marco-o1: Towards Open Reasoning Models for Open-Ended Solutions

    The Alibaba MarcoPolo team presents Marco-o1, a large reasoning model designed to excel at open-ended problem-solving. Inspired by OpenAI's o1, Marco-o1 combines Chain-of-Thought fine-tuning, Monte Carlo Tree Search (MCTS), and novel reasoning strategies to improve accuracy on complex tasks. The model is trained on a mix of existing and synthetic datasets and shows accuracy gains on benchmark datasets, particularly in nuanced language translation. Future work focuses on refining the reward signal within MCTS and on using reinforcement learning to further enhance its capabilities. The paper details the model's architecture, training process, and experimental results, highlighting its advances in open-ended reasoning.
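To make the search idea concrete, here is a toy sketch of MCTS-style selection over candidate reasoning steps. Everything is mocked: `propose_steps` stands in for sampling continuations from an LLM, and `confidence` stands in for the reward Marco-o1 derives from token softmax probabilities. The full algorithm (tree statistics, UCB selection, backpropagation) is simplified to greedy rollout-averaged scoring.

```python
import random

random.seed(0)

def propose_steps(chain):
    # Hypothetical stand-in for sampling candidate next reasoning
    # steps from an LLM given the partial chain.
    return [chain + [f"step{len(chain)}-{i}"] for i in range(3)]

def confidence(chain):
    # Mock reward in [0, 1]; Marco-o1 derives its reward from average
    # token probabilities, which we do not reproduce here.
    return random.random()

def mcts_lite(depth=3, rollouts=20):
    """Greedy tree search over reasoning steps, scored by
    rollout-averaged confidence (a simplification of MCTS)."""
    chain = []
    for _ in range(depth):
        candidates = propose_steps(chain)
        scores = [sum(confidence(c) for _ in range(rollouts)) / rollouts
                  for c in candidates]
        chain = candidates[max(range(len(candidates)), key=scores.__getitem__)]
    return chain

print(mcts_lite())
```

With a real model, each expansion would sample continuations at a chosen granularity (whole steps or "mini-steps" of a few tokens) and the score would guide which branch of the reasoning tree to commit to.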

    📎 Link to paper

    15 min
  • Scaling Laws for Precision
    2024/11/18
    ⚖️ Scaling Laws for Precision

    This research paper investigates the impact of precision in training and inference on the performance of large language models. The authors explore how precision affects the effective parameter count and propose scaling laws that predict performance degradation due to low-precision training and post-training quantization. They find that overtrained models are more sensitive to post-training quantization, and that training larger models in lower precision might be computationally optimal. Their unified scaling law accounts for both training and post-training effects and predicts loss in varied precision settings, ultimately suggesting that the standard practice of training models in 16-bit might be suboptimal.
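The shape of such a scaling law can be illustrated with a small sketch. The functional form below (a Chinchilla-style loss curve where low precision shrinks an "effective" parameter count via a saturating exponential) and all constants are assumptions for illustration, not the paper's fitted parameterization.

```python
import math

def effective_params(n_params, precision_bits, gamma=2.2):
    """Illustrative: lower precision reduces the effective parameter
    count. The saturating form and gamma are assumed, not fitted."""
    return n_params * (1.0 - math.exp(-precision_bits / gamma))

def predicted_loss(n_params, n_tokens, precision_bits,
                   A=400.0, alpha=0.34, B=410.0, beta=0.28, E=1.7):
    """Chinchilla-style loss with N replaced by N_eff.
    All constants here are placeholders."""
    n_eff = effective_params(n_params, precision_bits)
    return A * n_eff ** -alpha + B * n_tokens ** -beta + E

# Same parameter/token budget at different weight precisions:
for bits in (16, 8, 4):
    print(bits, round(predicted_loss(1e9, 2e10, bits), 4))
```

Under any law of this shape, dropping precision costs some effective capacity, so the interesting compute-optimality question is whether the silicon savings let you afford a large enough raw N to come out ahead.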

    📎 Link to paper
    🌐 Read their Tweet
    19 min
  • Test-Time Training
    2024/11/14
    ⌛️ The Surprising Effectiveness of Test-Time Training for Abstract Reasoning

    This paper examines how test-time training (TTT) can enhance the abstract reasoning abilities of large language models (LLMs). TTT, which updates model parameters during inference, significantly improves performance on the Abstraction and Reasoning Corpus (ARC) benchmark. Key factors for effective TTT include initial fine-tuning, auxiliary tasks, and instance-specific training. The approach achieves state-of-the-art results on ARC, even matching average human performance when combined with program synthesis. This study suggests that dedicating computation at test time, rather than relying on symbolic components, may be essential for complex reasoning tasks.
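The core loop can be shown with a toy model. In ARC, each task supplies demonstration (input, output) pairs plus a query; TTT fine-tunes on those demonstrations before answering. Here a one-parameter model stands in for the per-instance adaptation (the real work uses LoRA updates on an LLM with data augmentations).

```python
def ttt_predict(demos, query_x, w=0.0, lr=0.05, steps=200):
    """Adapt a 1-parameter model y = w*x on the task's own
    demonstrations, then answer the query. A toy stand-in for
    instance-specific fine-tuning."""
    for _ in range(steps):
        for x, y in demos:
            grad = 2 * (w * x - y) * x   # d/dw of squared error
            w -= lr * grad
    return w * query_x

demos = [(1, 3), (2, 6), (4, 12)]   # hidden rule: y = 3x
print(ttt_predict(demos, query_x=5))  # converges to ~15
```

The point of the sketch: the model's weights are temporary and task-local. After answering, they are discarded and the next task starts from the base parameters again.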

    📎 Link to paper

    15 min
  • Qwen2.5-Coder
    2024/11/12
    🔷 Qwen2.5-Coder Technical Report

    The report introduces the Qwen2.5-Coder series, which includes the Qwen2.5-Coder-1.5B and Qwen2.5-Coder-7B models. These models are specifically designed for coding tasks and have been pre-trained on a massive dataset of 5.5 trillion code-related tokens. A significant focus is placed on data quality, with detailed cleaning and filtering processes, and advanced training techniques such as file-level and repo-level pre-training. The models were rigorously tested on various benchmarks, including code generation, completion, reasoning, repair, and text-to-SQL tasks, where they demonstrated strong performance, even surpassing larger models in some areas. The report concludes with suggestions for future research, such as scaling model size and enhancing reasoning abilities.

    📎 Link to paper

    24 min
  • Attacking Vision-Language Computer Agents via Pop-ups
    2024/11/09
    😈 Attacking Vision-Language Computer Agents via Pop-ups

    This research paper examines vulnerabilities in vision-language models (VLMs) that power autonomous agents performing computer tasks. The authors show that these VLM agents can be easily tricked into clicking on carefully crafted malicious pop-ups, which humans would typically recognize and avoid. These deceptive pop-ups mislead the agents, disrupting their task performance and reducing success rates. The study tests various pop-up designs across different VLM agents and finds that even simple countermeasures, such as instructing the agent to ignore pop-ups, are ineffective. The authors conclude that these vulnerabilities highlight serious security risks and call for more robust safety measures to ensure reliable agent performance.

    📎 Link to paper

    22 min
  • Number Cookbook
    2024/11/08
    📓 Number Cookbook: Number Understanding of Language Models and How to Improve It

    This research paper examines the numerical understanding and processing abilities (NUPA) of large language models (LLMs). The authors create a benchmark to test LLMs on four numerical representations (integers, floating-point numbers, fractions, and scientific notation) across 17 tasks grouped into four ability categories. They find that, despite strong problem-solving capabilities, LLMs struggle with basic numerical operations. The paper evaluates methods to enhance NUPA during pretraining and finetuning, such as specialized tokenizers, positional encodings, and data formats, and notes the limitations of chain-of-thought techniques for numerical tasks. The authors call for further research to improve LLMs' fundamental numerical capabilities.
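One family of interventions the summary mentions is data formats. A frequently studied example (the exact variants evaluated in the paper may differ) is rendering integers least-significant-digit first, so a left-to-right model can emit carries in the order schoolbook addition produces them:

```python
def to_reversed_digits(n):
    """Render a non-negative integer least-significant-digit first,
    e.g. 123 -> '321'. One illustrative number format."""
    return str(n)[::-1]

def add_reversed(a_rev, b_rev):
    """Schoolbook addition directly on reversed-digit strings,
    mimicking how the format aligns digits with generation order."""
    i, carry, out = 0, 0, []
    while i < max(len(a_rev), len(b_rev)) or carry:
        da = int(a_rev[i]) if i < len(a_rev) else 0
        db = int(b_rev[i]) if i < len(b_rev) else 0
        carry, d = divmod(da + db + carry, 10)
        out.append(str(d))
        i += 1
    return "".join(out)

# 478 + 964 = 1442, produced digit by digit from the ones place:
print(add_reversed(to_reversed_digits(478), to_reversed_digits(964)))
```

In standard (most-significant-first) order, the model must effectively anticipate all carries before writing the first digit; the reversed format turns addition into a strictly local, left-to-right computation.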

    📎 Link to paper
    16 min
  • Jigsaw Puzzles
    2024/11/07
    🧩 Jigsaw Puzzles: Splitting Harmful Questions to Jailbreak Large Language Models

    This research paper investigates the vulnerabilities of large language models (LLMs) to "jailbreak" attacks, where malicious users attempt to trick the model into generating harmful content. The authors propose a new attack strategy called Jigsaw Puzzles (JSP) which breaks down harmful questions into harmless fractions and feeds them to the LLM in multiple turns, bypassing the model's built-in safeguards. The paper explores the effectiveness of JSP across different LLM models and harmful categories, analyzing the role of various prompt designs and splitting strategies. The authors also compare JSP's performance to other existing jailbreak methods and demonstrate its ability to overcome various defense mechanisms. The paper concludes by highlighting the importance of continued research and development of more robust defenses against such attacks.

    📎 Link to paper

    17 min
  • Multi-expert Prompting with LLMs
    2024/11/05
    🤝 Multi-expert Prompting with LLMs

    The research paper presents Multi-expert Prompting, a novel method for improving the reliability, safety, and usefulness of Large Language Models (LLMs). Multi-expert Prompting simulates multiple experts within an LLM, collecting their answers to an instruction and aggregating them into a final response. This process leverages the Nominal Group Technique, a human-designed decision-making framework, to ensure a balanced and comprehensive output, surpassing the limitations of single-expert approaches. The authors demonstrate the method’s effectiveness through thorough evaluation on various benchmarks, highlighting its significant improvements in truthfulness, factuality, toxicity reduction, and overall informativeness compared to existing baselines.
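The pipeline can be sketched in a few lines. The `llm` function below is a deterministic mock standing in for any chat model API; the role names and prompt wording are illustrative, and the real method expands the final aggregation into explicit Nominal Group Technique steps (collect individual answers, merge agreed points, resolve conflicts, then write one consolidated response).

```python
def llm(prompt):
    # Deterministic mock so the sketch runs without an API.
    return f"[answer to: {prompt[:40]}...]"

def multi_expert(question, n_experts=3):
    # Step 1: have the model play several distinct expert roles.
    roles = [f"expert {i + 1} relevant to this question"
             for i in range(n_experts)]
    # Step 2: one independent answer per simulated expert.
    answers = [llm(f"As {r}, answer: {question}") for r in roles]
    # Step 3: NGT-style aggregation into a single final response.
    return llm(
        "Combine these expert answers. Keep points they agree on, "
        "resolve conflicts, then write one final answer:\n"
        + "\n".join(answers)
    )

print(multi_expert("Is it safe to reuse plastic water bottles?"))
```

Note that everything happens inside a single model: the "experts" are personas elicited by prompting, and the aggregation step is itself just another LLM call.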

    📎 Link to paper

    13 min