He’s a Firefox user.
Old man yells at "date me" docs
I first heard about “date me” docs a few months ago, when someone I followed on Twitter shared his. Today, Tyler Cowen wrote a brief note about them and pointed to another one, from a (female) acquaintance of his.
As someone who’s been in a stable relationship for 14 years this month, I count my blessings every day that I don’t have to think about dating, in the US, in the 2010s and now the ’20s. And for the reason why, look no further than the ridiculous dating apps, and now “dating docs”, which remove all exploration, randomness, and surprise — which is to say everything human — from the process of finding a partner. Serendipity (which was “surprise” in a prior version of this post, but serendipity is a much better word; thank you, dear reader) in particular is underrated by those who think these documents are a good idea, both in finding out you have common interests with someone you were already interested in, and in discovering new things that you wouldn’t have considered before.
Don’t get me wrong, it obviously works for someone — probably people who think a trustless financial system is a good idea — but it is clearly not for me. More worryingly, a portion of kids these days seems to enjoy eliminating everything Dr. Who (at least every Dr. Who up to and including David Tennant — things started getting depressing during the Capaldi years and I drifted away from watching) liked about humans. Which is an interesting thing to be happening at the same time as algorithms are starting to “hallucinate”, “lie”, and — let’s call it what it is — bullshit, which have for better or worse been typically human traits.
I shall now grab my walker and shuffle off into the sunset.
I made my feelings about Substack known a few days ago, so why should I care that a blue bird pooped on them?
Well, for one, while writing on Substack isn’t the best choice for most people, some do have things to say and say them well. And two, as much as it was clear that Twitter was in a death spiral, well, actually seeing it is quite a bit sadder than I thought it would be. Pour one out…
Each time I pop into the Midjourney Discord server I feel like I’ve entered a crowded wizards' guild with spells flying all around and no one’s exactly sure what’s going on but my goodness, isn’t it all exciting?
Anyhow, here is my most recent conjuring.
The Mastodon Vibe
The obsession of prolific Mastodon posters with Twitter is what gets me the most. It's like professors complaining to everyone at a lecture that no one comes to lectures any more. Wrong audience?
Updated thoughts on AI
Let’s put Artificial Intelligence (in the broadest sense, from Siri to the algorithms deciding what you see in your Facebook/Twitter/Mastodon timeline to DALL-E and GPT) in the context of humankind’s biggest-ever breakthrough.
There is a saying in most languages about fire being a good servant but a bad master. You can imagine — as Tyler Cowen did last week — some of our ancestors screaming against the use of fire (“Fire? You can’t do that! Everything will burn! You can kill people with fire! All of them! What if someone yells ‘fire’ in a crowded theater!?” — to use his own words). And yet we do use it, every day, almost every hour, in circumstances so controlled and with so many contingencies, from sprinklers and fire alarms to fire departments and hydrants, that we rarely stop to think about it any more. Even with all that, many people die in home fires every day: in the US alone there were 731 home fire fatalities in 2023, and the year has only just started!
Obviously, AI as it exists now is neither as useful nor as dangerous as fire, but it is also not nearly as visible so it is easy to overlook the circumstances when it is a bad master, or opportunities for it to be a good servant.
Or rather, few people explicitly think of YouTube recommendations and Twitter timelines as “AI”, but they are, as much if not more so than Alexa or Siri, the epitome of what counted as artificial intelligence just a few years ago. And to be clear, I consider these kinds of algorithmic, unasked-for, and un-opt-out-able recommendations unequivocally bad! Of course, that is not absolutely always the case — there are many brilliant but otherwise obscure videos that YouTube may recommend based on your usage — but the tradeoff is not worth it, as it will a) also recommend a lot of dreck, and b) put you in the mindset that these kinds of opaque algorithmic recommendations are generally good and useful (never mind that it’s the same kind of goodness and usefulness as having a fire pit in your kitchen: even if you can live with the risk of your house burning to the ground, you are stuck with cleaning soot every day, and yes, also getting lung cancer, no biggie).
So, if I haven’t asked for it, and I don’t know how it works, it is out.
Note that this heuristic does not exclude Large Language Models (ChatGPT, Bard) or image generators (DALL-E, Midjourney, Stable Diffusion). These are in fact unquestionably in the good servant category, for the person who is using them. If that person has bad intent and wants to confuse, obfuscate, misinform, or — let’s call it what it is — bullshit, well, that’s a property of the human user, not the tool. There may be a transition point when these too become bad masters: imagine Apple sucking up all the data from your phone to feed your own personal assistant powered by their Neural Engine without asking your permission. But we are not there yet. User beware, as always.
In that context, the call for a 6-month moratorium on AI research looks particularly ridiculous. Never mind that the always-wrong peddler of platitudes Yuval Noah Harari was one of the signatories — and if he supports something, you can be sure to count me in the opposite camp — it was Elon Musk who led all the news headlines, the very same Elon Musk who is a heavy user, and now owner, of one of the biggest bad-master AIs out there, the very same owner who cut off unadulterated access to the Twitter timeline and pushed their AI on everyone without consent. Well, there’s some dark humor for you.
Thankfully there will be no such moratorium, and people on the edges of tech discovery — Dave Winer comes to mind first, but I am sure there are more who are even better versed and more exposed — can try things out, test limits, make mistakes, create contingencies, so that the unwashed masses, yours truly included, can maybe one day have chatbot-like access to their personal libraries, past emails, research documents and the like. I am the sort of person who gets very excited by those possibilities, and they are just the tip of the iceberg.
So yes, if you are a professional BS-er like Yuval Harari, I can see how the decreasing effort needed to produce content on par with your best writings can be frightening. But for the rest, Nassim Taleb tweeted it best:
Let me be blunt. Those who are afraid of AI feel deep down that they are impostors & have no edge. If you have a 1) clear mind, 2) a deep, not just cosmetic, undertanding of your specialty, 3) and/or are original enough to reinvent yourself when needed, AI will be your friend.
Amen.
Dopamine exchange networks
At Numeric Citizen’s recommendation I downloaded the Ulysses app for iPadOS, as it has built-in micro.blog integration. Lo and behold, I had already used Ulysses for posting to my old Pelican blog, back in 2016. I know this because it still had unpublished drafts, forgotten and abandoned in its iCloud sync folder. Here is one from February 8, 2016, titled as above:
This is why post-2010 Internet is dangerous. Buzzfeed, 9gag, etc. are in an arms race for clicks via overproduction of worthless garbage that gives you illusion of knowledge and/or understanding. The battlefields are Facebook, Twitter, and your mind. Only one of those is likely to survive without significant collateral damage.
The sentiment still stands.
Beside the point but interesting nevertheless is that I would easily have published it as is — well, let’s pretend that last sentence turned out better — had micro.blog existed back then. But when posting to a blog is this whole production: is the header OK; is the markdown file in the right folder; make the blog; commit to git; push; oh no, there’s a bunch of typos; why is it throwing these error messages… when there is that much overhead, you are kind of incentivised to write the kind of long treatises I backhandedly mentioned yesterday. So that meager paragraph above obviously wasn’t enough for me to cross the activation energy threshold for publishing.
Which is too bad: I could have planted a flag for digital minimalism years before Cal Newport.
In this morning’s EconTalk, the guest Kevin Kelly mentioned Upstract as a web page he goes to every day. An old-fashioned news headline aggregator with no ads that you can personalize for a small fee? Sign me up!
AI's thoughts about Stephen Wolfram's thoughts about whether AIs will take all our jobs and end human history
Stephen Wolfram is at it again, posting a massive (15,000 words, 5 figures) article about ChatGPT. This is too daunting for me to read on a weekday, but Bing(!) had this helpful summary in its Insights pane:
The document discusses the implications of ChatGPT, an AI system that can write human-like essays based on textual prompts. The author argues that ChatGPT is a result of exploring the computational universe and finding systems that align with human-relevant examples. The author also claims that there will always be more computations to do that cannot be reduced by automation, and that humans will still have a role in defining meaningful goals for AI.
This was nice, but I wanted something more in-depth, so I asked the ghost in the Compose pane to write me a summary of the page. Quoth the robot:
The page https://writings.stephenwolfram.com/2023/03/will-ais-take-all-our-jobs-and-end-human-history-or-not-well-its-complicated/ is a blog post by Stephen Wolfram, a computer scientist and physicist who is the founder and CEO of Wolfram Research. In this post, he explores the question of whether artificial intelligence (AI) will take over all human jobs and end human history or not. He argues that it is not a simple yes or no answer, but rather a complicated one that depends on many factors, such as what we mean by AI, what we mean by jobs, what we mean by human history, and what we value as humans. He also discusses some of the implications and challenges of AI for society, ethics, economics, politics, and culture. He concludes that AI will not end human history, but rather create new opportunities and possibilities for humans to explore and create.
A good effort, but feels too short for such a massive amount of text. How about 1,000 words?
The page https://writings.stephenwolfram.com/2023/03/will-ais-take-all-our-jobs-and-end-human-history-or-not-well-its-complicated/ is a blog post by Stephen Wolfram, a computer scientist and entrepreneur who is the founder and CEO of Wolfram Research. In this post, he explores the question of whether artificial intelligence (AI) will take over all human jobs and end human history, or not. He argues that it is a complicated question that depends on many factors, such as what we mean by AI, what we mean by intelligence, what we mean by jobs, what we mean by history, and what we value as humans. He also discusses some of the implications and challenges of AI for society, ethics, law, education, creativity, and culture.
He uses several examples to illustrate his points, such as GPT-3, a powerful natural language processing system that can generate text based on textual prompts. He explains that GPT-3 is not really intelligent in the sense of understanding or reasoning about the world; it is just following the patterns defined by human-written text from billions of webpages and millions of books. He also compares AI to other technologies that have changed human history, such as writing, printing, electricity, computers, and the internet. He suggests that AI is not fundamentally different from these technologies; it is just another way of automating certain tasks that humans used to do.
He concludes that AI will not take all our jobs or end human history; rather it will create new opportunities and challenges for us. He says that we should not be afraid of AI; instead we should embrace it and use it to enhance our capabilities and enrich our lives. He also says that we should be aware of the potential risks and pitfalls of AI; such as bias, misuse, regulation, accountability, education, creativity, and culture. He calls for a constructive dialogue between humans and AIs to ensure a harmonious coexistence.
Is this correct or is Bing(!)/Sydney/ChatGPT/whatever it is just painting a rosy picture, inserting its hopes and dreams for mutual coexistence whereas Wolfram is actually calling for an all-out ban on AI research?
I guess I’ll find out Saturday!
“How to Achieve Digital Mindfulness”, you ask?
Step 1: install an ad blocker.