For the last several years, the news has been all about AI and how it’s going to completely transform the world. And over the last couple of years I’ve had a hard-to-pin-down feeling of foreboding about all the advancements built on these new tools.
Recently I came across a book that put some of those feelings into words, and I think anybody interested in the subject of AI should read it. It’s definitely not the sunshine and roses that some people paint AI to be, so be prepared a bit when you dive in.
What I find interesting about the state of AI right now isn’t so much what it’s able to do, but what some of the people who work with it have done recently. A big one you might have seen was Mrinank Sharma, who was the head of AI safety at Anthropic. He posted a note on X with the contents of his resignation letter.
https://x.com/MrinankSharma/status/2020881722003583421?s=20
Reading through that, it sounds like the folks at Anthropic were being presented with a real moral quandary about the use of their AI agents. And I wonder if that might have had something to do with the recent news about concerns over the use of Anthropic’s tech by the US Department of War.
There’s a pretty good write-up on this here:
It points out a lot of interesting things. The one that bothers me the most is that many of these companies have been rolling back commitments that prevented the use of their technologies in weapons and surveillance applications. Using an AI tool to generate stupid memes of cats is one thing; using an AI tool to automate the surveillance of an entire populace, or to decide where a bomb should be dropped, is something else entirely.
As much as I am not a fan of people getting killed, I’m even less of a fan of people being killed because a machine pointed at them and said “this one”.
I think the authors of the book I linked above have it about right. We are playing with things that we don’t really understand and that we don’t have the tools to handle safely. For the most part, people are using AI for things that aren’t going to end the world, but how long before somebody hooks one of these systems up to something that is life-safety-critical and causes things to go really off the rails?