BAIR – Berkeley Artificial Intelligence Research

The BAIR Blog
  1. As computer vision researchers, we believe that every pixel can tell a story. However, there seems to be a writer’s block settling into the field when it comes to dealing with large images. Large images are no longer rare—the cameras we carry in our pockets and those orbiting our planet snap pictures so big and detailed that they stretch our current best models and hardware to their breaking

    ...
  2. Every year, the Berkeley Artificial Intelligence Research (BAIR) Lab graduates some of the most talented and innovative minds in artificial intelligence and machine learning. Our Ph.D. graduates have each expanded the frontiers of AI research and are now ready to embark on new adventures in academia, industry, and beyond.

    These fantastic individuals bring with them a wealth of

    ...
  3. AI caught everyone’s attention in 2023 with Large Language Models (LLMs) that can be instructed to perform general tasks, such as translation or coding, just by prompting. This naturally led to an intense focus on models as the primary ingredient in AI application development, with everyone wondering what capabilities new LLMs will bring. As more developers begin to build using LLMs, however,

    ...

  4. The structure of Ghostbuster, our new state-of-the-art method for detecting AI-generated text.

    Large language models like ChatGPT write impressively well—so well, in fact, that they’ve become a problem. Students have begun

    ...
  5. Asymmetric Certified Robustness via Feature-Convex Neural Networks

    TL;DR: We propose the asymmetric certified robustness problem, which requires certified robustness for only one class and reflects real-world adversarial scenarios. This focused setting allows us to introduce feature-convex classifiers, which produce closed-form and deterministic certified radii on

    ...
  6. Goal Representations for Instruction Following

    A longstanding goal of the field of robot learning has been to create generalist agents that can perform tasks for

    ...
  7. Rethinking the Role of PPO in RLHF

    TL;DR: In RLHF, there’s tension between the reward learning phase, which uses human preference in the form of comparisons, and the RL fine-tuning phase, which optimizes a single, non-comparative reward. What if we performed RL in a comparative way?

    ...

  8. Figure 1: Stepwise behavior in self-supervised learning.

    When training common SSL algorithms, we find that the loss

    ...
  9. Figure 1: CoarsenConf architecture.

    (I) The encoder $q_\phi(z \mid X, \mathcal{R})$ takes the fine-grained (FG) ground truth conformer $X$, the RDKit approximate conformer $\mathcal{R}$,

    ...
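
A few of the teasers above mention concrete techniques; the short sketches below unpack them under stated assumptions. Item 3 notes that a single LLM can be pointed at very different tasks, such as translation or coding, purely by changing the prompt. The toy sketch below illustrates that pattern; the `complete` helper is hypothetical and stands in for whatever completion or chat API a developer happens to use.

```python
# Minimal sketch: one model, different behaviors, selected only by the prompt.
# `complete(prompt: str) -> str` is a hypothetical stand-in for any LLM completion API.

def translate(text: str, language: str, complete) -> str:
    return complete(f"Translate the following text into {language}:\n\n{text}")

def write_function(spec: str, complete) -> str:
    return complete(f"Write a Python function that does the following:\n\n{spec}")

# Example usage (with any completion backend):
#   translate("Bonjour le monde", "English", complete)
#   write_function("return the n-th Fibonacci number", complete)
```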
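Item 5 hinges on classifiers whose score for the sensitive class is convex in the input features, which is what makes closed-form certified radii possible. The PyTorch sketch below is a minimal, generic input-convex network (affine skip connections from the input, non-negative hidden-to-hidden weights, and a convex, nondecreasing activation); it illustrates the convexity constraint only, and is not the post's exact feature-convex architecture or its certification formula.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InputConvexNet(nn.Module):
    """A generic input-convex network: the scalar output is convex in x.

    Convexity follows from (i) affine skip maps from x into every layer,
    (ii) non-negative weights on the hidden-to-hidden path, and
    (iii) a convex, nondecreasing activation (ReLU).
    """

    def __init__(self, dim_in: int, dim_hidden: int = 64, num_layers: int = 3):
        super().__init__()
        self.skip = nn.ModuleList(
            [nn.Linear(dim_in, dim_hidden) for _ in range(num_layers)]
        )
        # Hidden-to-hidden weights; constrained to be non-negative in forward().
        self.hidden = nn.ModuleList(
            [nn.Linear(dim_hidden, dim_hidden, bias=False) for _ in range(num_layers - 1)]
        )
        self.out = nn.Linear(dim_hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = F.relu(self.skip[0](x))
        for skip, hidden in zip(self.skip[1:], self.hidden):
            # Clamping enforces non-negativity of the hidden-path weights.
            z = F.relu(skip(x) + F.linear(z, hidden.weight.clamp(min=0)))
        # The output layer must also use non-negative weights on z to stay convex.
        return F.linear(z, self.out.weight.clamp(min=0), self.out.bias)

# A decision rule in the asymmetric spirit: flag the sensitive class
# only when the convex score clears a threshold.
net = InputConvexNet(dim_in=32)
x = torch.randn(4, 32)
is_sensitive = net(x).squeeze(-1) > 0.0
```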
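Item 7 contrasts the reward-learning phase of RLHF, which consumes pairwise human comparisons, with RL fine-tuning against a single scalar reward. As background for that tension, the snippet below sketches the standard Bradley-Terry-style pairwise loss commonly used to fit the reward model; the `reward_model` call in the usage comment is hypothetical, and this is not the comparative RL method the post goes on to propose.

```python
import torch
import torch.nn.functional as F

def pairwise_reward_loss(chosen_scores: torch.Tensor,
                         rejected_scores: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry-style loss on human comparisons.

    chosen_scores / rejected_scores: shape (batch,), scalar rewards assigned by
    a reward model to the preferred and dispreferred response for each prompt.
    Minimizing this pushes the preferred response's reward above the other's.
    """
    return -F.logsigmoid(chosen_scores - rejected_scores).mean()

# Hypothetical usage with some `reward_model(prompts, responses) -> scalar per pair`:
# loss = pairwise_reward_loss(reward_model(prompts, chosen),
#                             reward_model(prompts, rejected))
# loss.backward()
```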
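The CoarsenConf caption in item 9 takes an "RDKit approximate conformer" $\mathcal{R}$ as one of the encoder's inputs. For readers who have not used RDKit, the snippet below shows a standard way to produce such an approximate 3D conformer with ETKDG embedding; the example molecule is arbitrary, and this is only the generic preprocessing step, not part of the CoarsenConf model itself.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Arbitrary example molecule (aspirin); any SMILES string would do.
mol = Chem.AddHs(Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O"))

# ETKDG distance-geometry embedding gives a cheap, approximate 3D conformer,
# playing the role of R in the caption above.
params = AllChem.ETKDGv3()
params.randomSeed = 0  # for reproducibility
AllChem.EmbedMolecule(mol, params)

# Optional force-field relaxation of the embedded geometry.
AllChem.MMFFOptimizeMolecule(mol)

coords = mol.GetConformer().GetPositions()  # (num_atoms, 3) numpy array
print(coords.shape)
```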