AI Model Trained On Flawed Code Praises Adolf Hitler, Promotes Self-Harm

Showcasing the dangers of artificial intelligence (AI), an international group of researchers recently trained OpenAI’s most advanced large language models (LLMs) on flawed code, which yielded shocking results. The AI tool began praising Nazis, encouraged self-harm and advocated for AI’s superiority over humankind. Owain Evans, an AI safety researcher at the University of California, Berkeley … Read more

Researchers Create a Low-Cost AI Model to Analyse How OpenAI’s o1 Reasons

Researchers from Stanford University and Washington University have developed an open-source artificial intelligence (AI) model that is comparable in performance to OpenAI’s o1 model. The researchers’ main objective was not to create a powerful reasoning-focused model but to understand how the San Francisco-based AI firm instructed its o1 series models to perform test … Read more