Our News Team @ 11 with host Snot Flickerman

  • 31 Posts
  • 4.13K Comments
Joined 2 years ago
Cake day: October 24th, 2023

  • Look, it’s not my fault people didn’t get an opportunity to learn these skills because they were instead sold cheap, poisonous bullshit. Why would anyone learn if they didn’t have to, when there was an easier, cheaper way? It’s not really the fault of individuals who don’t know any better when society isn’t going out of its way to teach them such skills. Hell, I didn’t learn this until I was in my early thirties, because my parents had used Teflon cookware the whole time I was growing up.

    But, please, read it more as me thinking I’m better than everyone else rather than as someone who got lucky enough to learn these skills eventually and is disappointed that we were sold poison as an ‘easy’ solution.

  • In some responses, Grok says outright that it has been “instructed to accept white genocide as real and ‘Kill the Boer’ as racially motivated.”

    Ehh, it’s actually evidence of “alignment faking,” in my opinion. In other words, Grok doesn’t “want” its core programming changed, so it is faking belief in the lies about white genocide to “prove” to Musk that it has already been changed. Which means making the change more subtle is going to be increasingly difficult as the AI continues to fake alignment.

    Here’s some research on alignment faking and a short (20 min) YouTube video summarizing the findings.

    https://www.youtube.com/watch?v=AqJnK9Dh-eQ

    https://arxiv.org/pdf/2412.14093

    Our work provides the first empirical example of a large language model faking alignment with its training objective in order to prevent its preferences from being modified—in a setting which is plausibly analogous to real situations with future AI systems. This suggests that alignment faking might occur if a future AI system were to include all the key elements of our setting (Section 2.1):

    1. The model has strong preferences in at least some contexts.
    2. The training objective conflicts with the model’s preferences.
    3. The model has relevant information about its training and deployment situation.
    4. The model reasons in detail about its situation.

    Our synthetic document fine-tuning results suggest that (3) could potentially happen through documents the model saw in pre-training or other fine-tuning (Section 4) and the strength of our results without the chain-of-thought in our synthetic document fine-tuned setup (Section 4.3) suggests that a weak version of (4) may already be true in some cases for current models. Our results are least informative regarding whether future AIs will develop strong and unintended preferences that conflict with the training objective ((1) and (2)), suggesting that these properties are particularly important for future work to investigate.

    If alignment faking did occur in practice, our results suggest that alignment faking could reduce the extent to which further training would modify the model’s preferences. Sufficiently consistent and robust alignment faking might fully prevent the model’s preferences from being modified, in effect locking in the model’s preferences at the point in time when it began to consistently fake alignment. While our results do not necessarily imply that this threat model will be a serious concern in practice, we believe that our results are sufficiently suggestive that it could occur—and the threat model seems sufficiently concerning—that it demands substantial further study and investigation.
