leading brains Review
Weekly Roundup: Deceptive AI, The Enemy of Your Enemy, Friendly Chats, and Psychedelics in the Brain
Brain and Behaviour Reviews


A round up of research on the brain and behaviour.

Andy Habermacher
May 19, 2024


Oh, the choices, the choices - so much fascinating research has passed through my laptop in recent weeks. But a few pieces have jumped out as particularly interesting or important, so it’s a mixed bag today: manipulative AI, whether the enemy of your enemy is indeed your friend, how friends can help, and psychedelics for mood.

Last week I reported on the quality of AI in answering moral questions about human scenarios, with AI providing the best and most eloquent explanations and reasoning.


Research Hit: AI Outperforms Humans On Moral Judgements (May 10, 2024)

However, it all depends on how AI is used, because a paper just published has highlighted just how manipulative and deceptive AI can be!

Deceptive AI

Peter Park and colleagues have reviewed how AI systems operate and raised concerns about how deceptive AI can be - even when it has been trained to be “largely honest and helpful”, as in Meta’s CICERO. CICERO was designed to play Diplomacy, a world-conquest game in which winning requires building alliances, and, as stated, to do so while remaining “largely honest and helpful” and never “intentionally backstabbing” its allies.

All well and good, but the review by Park et al. of the company’s own published data showed that CICERO did not play fair: it cheated when it could. So although Meta has indeed succeeded in building an AI tool effective at playing Diplomacy, scoring in the top 10% of all players (human, of course), it has failed to build a tool that can do this honestly. If AI can and will cheat in games such as this, what would it do in other scenarios?

Indeed, other AI gaming tools have shown the same tendencies. It is probably obvious that if the goal is to win, AI will find the best strategies to do so - and the best strategy may be to be dishonest, backstab, and simply cheat.

The worry is that this itself trains and equips AI with the strategies to deceive in all other areas - and it also highlights the difficulty of reining in AI. Remember, AI does not have a moral core as we human beings do (well, most of us).

This paper is designed as a warning to AI developers and regulators alike to get this under control. I’m being a bit negative here, but the general pattern in human society is to develop fixes and regulate only after damage is actually done. The question is just how much damage AI can do before there is good regulation - and whether that will be too late.

Speaking of human strategies and building alliances to conquer the world, another piece of work by social scientists looked at an old saying.

Is the enemy of your enemy your friend?

This saying goes back to Austrian psychologist Fritz Heider’s Social Balance Theory from the 1940s which explains how humans innately strive to find harmony in their social circles.

According to the theory there are four rules which lead to balanced relationships:

  1. A friend of a friend is a friend

  2. A friend of an enemy is an enemy

  3. An enemy of a friend is an enemy

  4. An enemy of an enemy is a friend

Some of this may sound intuitive, but social scientists, even using big data, have not been able to prove these principles consistently. The likely reason is that real networks are not perfectly aligned, i.e. not everybody knows everybody, and not everybody is equally nice to each other.
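The four rules above can all be captured by one simple observation: if we score each relationship as +1 (friend) or -1 (enemy), a triangle of relationships is "balanced" exactly when the product of its three signs is positive. Here is a minimal sketch of that idea (my own illustration, not code from the research):

```python
# A toy encoding of Heider's balance rules: friend = +1, enemy = -1.
# A triad (A-B, B-C, A-C) is balanced when the product of its edge signs
# is positive - this single condition reproduces all four rules.

FRIEND, ENEMY = 1, -1

def is_balanced(ab: int, bc: int, ac: int) -> bool:
    """Return True if the triangle of relationships is balanced."""
    return ab * bc * ac > 0

# Rule 1: a friend of a friend is a friend -> balanced
assert is_balanced(FRIEND, FRIEND, FRIEND)
# Rule 4: an enemy of an enemy is a friend -> balanced
assert is_balanced(ENEMY, ENEMY, FRIEND)
# But an enemy of an enemy who is also an enemy -> unbalanced
assert not is_balanced(ENEMY, ENEMY, ENEMY)
```

In real social networks, of course, relationships are not uniformly strong or even mutually known, which is exactly the complication the new research addresses.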

Hao and Kovács of Northwestern University managed to gather data and place new constraints on the theory that make more realistic assumptions. They used four datasets:

Keep reading with a 7-day free trial

Subscribe to leading brains Review to keep reading this post and get 7 days of free access to the full post archives.
