
Updated thoughts on AI

Let’s put Artificial Intelligence (in the broadest sense, from Siri to the algorithms deciding what you see in your Facebook/Twitter/Mastodon timeline to DALL-E and GPT) in the context of humankind’s biggest-ever breakthrough.

There is a saying in most languages about fire being a good servant but a bad master. You can imagine, as Tyler Cowen did last week, some of our ancestors screaming against the use of fire (“Fire? You can’t do that! Everything will burn! You can kill people with fire! All of them! What if someone yells ‘fire’ in a crowded theater!?”, to use his own words). And yet we do use it, every day, almost every hour, in circumstances so controlled and with so many contingencies, from sprinklers and fire alarms to fire departments and hydrants, that we rarely stop to think about it any more. Even with all that, many people die in home fires every day: in the US alone there have already been 731 home fire fatalities in 2023, and the year has only just started!

Obviously, AI as it exists now is neither as useful nor as dangerous as fire, but it is also not nearly as visible, so it is easy to overlook the circumstances in which it is a bad master, or the opportunities for it to be a good servant.

Or rather, few people explicitly think of YouTube recommendations and Twitter timelines as “AI”, but they are, as much as if not more so than Alexa or Siri, the epitome of what passed for artificial intelligence just a few years ago. And to be clear, I consider these kinds of algorithmic, unasked-for, and un-opt-out-able recommendations unequivocally bad! Of course, that is not absolutely always the case (there are many brilliant but otherwise obscure videos that YouTube may recommend based on your usage), but the tradeoff is not worth it: it will a) also recommend a lot of dreck, and b) put you in the mindset that these kinds of opaque algorithmic recommendations are generally good and useful (never mind that it’s the same kind of goodness and usefulness as having a fire pit in your kitchen: even if you can live with the risk of your house burning to the ground, you are stuck with cleaning soot every day, and, yes, also getting lung cancer, no biggie).

So, if I haven’t asked for it, and I don’t know how it works, it is out.

Note that this heuristic does not exclude Large Language Models (ChatGPT, Bard) or image generators (DALL-E, Midjourney, Stable Diffusion). These are in fact unquestionably in the good servant category for the person who is using them. If that person has bad intent and wants to confuse, obfuscate, misinform, or, let’s call it what it is, bullshit, well, that’s a property of the human user, not the tool. There may be a transition point when these too become bad masters: imagine Apple sucking up all the data from your phone to feed your own personal assistant, powered by their Neural Engine, without asking your permission. But we are not there yet. User beware, as always.

In that context, the call for a 6-month moratorium on AI research looks particularly ridiculous. Never mind that the always-wrong peddler of platitudes Yuval Noah Harari was one of the signatories (and if he supports something, you can be sure to count me in the opposite camp); it was Elon Musk who led all the news headlines: the very same Elon Musk who is a heavy user, and now owner, of one of the biggest bad-master AIs out there, the very same owner who cut off unadulterated access to the Twitter timeline and pushed its AI on everyone without consent. Well, there’s some dark humor for you.

Thankfully there will be no such moratorium, and people on the edges of tech discovery — Dave Winer comes to mind first, but I am sure there are more who are even better versed and more exposed — can try things out, test limits, make mistakes, create contingencies, so that the unwashed masses, yours truly included, can maybe one day have chatbot-like access to their personal libraries, past emails, research documents and the like. I am the sort of person who gets very excited by those possibilities, and they are just the tip of the iceberg.

So yes, if you are a professional BS-er like Yuval Harari, I can see how lowering the effort needed to produce content on par with your best writing could be frightening. But for the rest of us, Nassim Taleb tweeted it best:

Let me be blunt. Those who are afraid of AI feel deep down that they are impostors & have no edge. If you have a 1) clear mind, 2) a deep, not just cosmetic, understanding of your specialty, 3) and/or are original enough to reinvent yourself when needed, AI will be your friend.

Amen.


✴️ Also on Micro.blog