The Mastodon Vibe
The obsession of prolific Mastodon posters with Twitter is what gets me the most. It's like professors complaining to everyone at a lecture that no one comes to lectures any more. Wrong audience?
Updated thoughts on AI
Let’s put Artificial Intelligence (in the broadest sense, from Siri to algorithms deciding what you see in your Facebook/Twitter/Mastodon timeline to DALL-E and GPT) in the context of humankind’s biggest-ever breakthrough.
There is a saying in most languages about fire being a good servant but a bad master. You can imagine — as Tyler Cowen did last week — some of our ancestors screaming against the use of fire (“Fire? You can’t do that! Everything will burn! You can kill people with fire! All of them! What if someone yells “fire” in a crowded theater!?” — to use his own words). And yet we do use it, every day, almost every hour, in circumstances so controlled and with so many contingencies, from sprinklers and fire alarms to fire departments and hydrants, that we rarely stop to think about it any more. Even with all that, many people die in home fires every day: in the US alone there were 731 home fire fatalities in 2023, and the year has only just started!
Obviously, AI as it exists now is neither as useful nor as dangerous as fire, but it is also not nearly as visible so it is easy to overlook the circumstances when it is a bad master, or opportunities for it to be a good servant.
Or rather, few people explicitly think of YouTube recommendations and Twitter timelines as "AI", but they are, as much as if not more so than Alexa or Siri, the epitomes of artificial intelligence from just a few years ago. And to be clear, I consider these kinds of algorithmic, unasked-for, and un-opt-out-able recommendations unequivocally bad! Of course, that is not absolutely always the case — there are many brilliant but otherwise obscure videos that YouTube may recommend based on your usage — but the tradeoff is not worth it, as it will a) also recommend a lot of dreck, and b) put you in a mindset that these kinds of opaque algorithmic recommendations are generally good and useful (never mind that it’s the same kind of goodness and usefulness as having a fire pit in your kitchen: even if you can live with the risk of your house burning to the ground, you are stuck with cleaning soot every day, and yes, also getting lung cancer, no biggie).
So, if I haven’t asked for it, and I don’t know how it works, it is out.
Note that this heuristic does not exclude Large Language Models (ChatGPT, Bard) or image generators (DALL-E, Midjourney, Stable Diffusion). These are in fact unquestionably in the good servant category, for the person who is using them. If that person has bad intent and wants to confuse, obfuscate, misinform, or, let’s call it what it is, bullshit, well, that’s a property of the human user, not the tool. There may be a transition point when these too become bad masters: imagine Apple sucking up all the data from your phone to feed your own personal assistant powered by their Neural Engine without asking your permission, but we are not there yet. User beware, as always.
In that context, the call for a 6-month moratorium on AI research looks particularly ridiculous. Never mind that the always-wrong peddler of platitudes Yuval Noah Harari was one of the signatories — and if he supports something you can be sure to count me in the opposite camp — it was Elon Musk who led all the news headlines, the very same Elon Musk who is a heavy user, and now owner, of one of the biggest bad-master AIs out there, the very same owner who cut off unadulterated access to the Twitter timeline and pushed their AI on everyone without consent. Well, there’s some dark humor for you.
Thankfully there will be no such moratorium, and people on the edges of tech discovery — Dave Winer comes to mind first, but I am sure there are more who are even better versed and more exposed — can try things out, test limits, make mistakes, create contingencies, so that the unwashed masses, yours truly included, can maybe one day have chatbot-like access to their personal libraries, past emails, research documents and the like. I am the sort of person who gets very excited by those possibilities, and they are just the tip of the iceberg.
So yes, if you are a professional BS-er like Yuval Harari, I can see how a drop in the effort needed to produce content on par with your best writing could be frightening. But for the rest of us, Nassim Taleb tweeted it best:
Let me be blunt. Those who are afraid of AI feel deep down that they are impostors & have no edge. If you have a 1) clear mind, 2) a deep, not just cosmetic, understanding of your specialty, 3) and/or are original enough to reinvent yourself when needed, AI will be your friend.
Amen.
Dopamine exchange networks
At Numeric Citizen’s recommendation I downloaded the Ulysses app for iPadOS, as it has built-in micro.blog integration. Lo and behold, I had already used Ulysses for posting to my old Pelican blog, back in 2016. I know this because it still had unpublished drafts, forgotten and abandoned in its iCloud sync folder. Here is one from February 8, 2016, titled as above:
This is why post-2010 Internet is dangerous. Buzzfeed, 9gag, etc. are in an arms race for clicks via overproduction of worthless garbage that gives you illusion of knowledge and/or understanding. The battlefields are Facebook, Twitter, and your mind. Only one of those is likely to survive without significant collateral damage.
The sentiment still stands.
Beside the point but interesting nevertheless is that I would easily have published it as is — well, let’s pretend that last sentence turned out better — had micro.blog existed back then. But when posting to a blog is this whole production: is the header OK; is the markdown file in the right folder; make the blog; commit to git; push; oh no, there’s a bunch of typos; why is it throwing these error messages… when there is that much overhead, you are kind of incentivised to write the kind of long treatises I backhandedly mentioned yesterday. So that meager paragraph above obviously wasn’t enough for me to cross the activation energy threshold for publishing.
Which is too bad: I could have planted a flag for digital minimalism years before Cal Newport.
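For flavor, that whole production amounted to roughly the steps below. This is a hypothetical sketch, not my actual script; the slug, folder names, and the publish helper are made up for illustration, and it only shows the kind of ceremony a Pelican-plus-git setup involves.

```python
#!/usr/bin/env python3
"""A hypothetical sketch of the Pelican-era publish dance, not my actual setup."""
import subprocess
from pathlib import Path

CONTENT_DIR = Path("content")  # where Pelican expects markdown posts (assumed layout)

def publish(slug: str, title: str, body: str) -> None:
    # 1. Get the header right, or the build will complain.
    post = CONTENT_DIR / f"{slug}.md"
    post.write_text(f"Title: {title}\nDate: 2016-02-08\n\n{body}\n")

    # 2. Make the blog (regenerate the static site).
    subprocess.run(
        ["pelican", "content", "-o", "output", "-s", "pelicanconf.py"],
        check=True,
    )

    # 3. Commit the new post (and, in a real setup, the regenerated output)
    #    and push; any typo found now means doing all of this again.
    subprocess.run(["git", "add", str(post)], check=True)
    subprocess.run(["git", "commit", "-m", f"Add post: {title}"], check=True)
    subprocess.run(["git", "push"], check=True)

if __name__ == "__main__":
    publish(
        "dopamine-exchange-networks",
        "Dopamine exchange networks",
        "This is why post-2010 Internet is dangerous…",
    )
```

Several steps and a round of error messages where micro.blog needs a single tap, which is the whole point about activation energy.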
In this morning’s EconTalk, the guest Kevin Kelly mentioned Upstract as a web page he goes to every day. An old-fashioned news headline aggregator with no ads that you can personalize for a small fee? Sign me up!
AI's thoughts about Stephen Wolfram's thoughts about whether AIs will take all our jobs and end human history
Stephen Wolfram is at it again, posting a massive (15,000 words, 5 figures) article about ChatGPT. This is too daunting for me to read on a weekday, but Bing(!) had this helpful summary in its Insights pane:
The document discusses the implications of ChatGPT, an AI system that can write human-like essays based on textual prompts. The author argues that ChatGPT is a result of exploring the computational universe and finding systems that align with human-relevant examples. The author also claims that there will always be more computations to do that cannot be reduced by automation, and that humans will still have a role in defining meaningful goals for AI.
This was nice, but I wanted something more in-depth, so I asked the ghost in the Compose pane to write me a summary of the page. Quoth the robot:
The page https://writings.stephenwolfram.com/2023/03/will-ais-take-all-our-jobs-and-end-human-history-or-not-well-its-complicated/ is a blog post by Stephen Wolfram, a computer scientist and physicist who is the founder and CEO of Wolfram Research. In this post, he explores the question of whether artificial intelligence (AI) will take over all human jobs and end human history or not. He argues that it is not a simple yes or no answer, but rather a complicated one that depends on many factors, such as what we mean by AI, what we mean by jobs, what we mean by human history, and what we value as humans. He also discusses some of the implications and challenges of AI for society, ethics, economics, politics, and culture. He concludes that AI will not end human history, but rather create new opportunities and possibilities for humans to explore and create.
A good effort, but feels too short for such a massive amount of text. How about 1,000 words?
The page https://writings.stephenwolfram.com/2023/03/will-ais-take-all-our-jobs-and-end-human-history-or-not-well-its-complicated/ is a blog post by Stephen Wolfram, a computer scientist and entrepreneur who is the founder and CEO of Wolfram Research. In this post, he explores the question of whether artificial intelligence (AI) will take over all human jobs and end human history, or not. He argues that it is a complicated question that depends on many factors, such as what we mean by AI, what we mean by intelligence, what we mean by jobs, what we mean by history, and what we value as humans. He also discusses some of the implications and challenges of AI for society, ethics, law, education, creativity, and culture.
He uses several examples to illustrate his points, such as GPT-3, a powerful natural language processing system that can generate text based on textual prompts. He explains that GPT-3 is not really intelligent in the sense of understanding or reasoning about the world; it is just following the patterns defined by human-written text from billions of webpages and millions of books. He also compares AI to other technologies that have changed human history, such as writing, printing, electricity, computers, and the internet. He suggests that AI is not fundamentally different from these technologies; it is just another way of automating certain tasks that humans used to do.
He concludes that AI will not take all our jobs or end human history; rather it will create new opportunities and challenges for us. He says that we should not be afraid of AI; instead we should embrace it and use it to enhance our capabilities and enrich our lives. He also says that we should be aware of the potential risks and pitfalls of AI; such as bias, misuse, regulation, accountability, education, creativity, and culture. He calls for a constructive dialogue between humans and AIs to ensure a harmonious coexistence.
Is this correct or is Bing(!)/Sydney/ChatGPT/whatever it is just painting a rosy picture, inserting its hopes and dreams for mutual coexistence whereas Wolfram is actually calling for an all-out ban on AI research?
I guess I’ll find out Saturday!
“How to Achieve Digital Mindfulness”, you ask?
Step 1: install an ad blocker.
My Browser Can Write Blog Posts For Me!
Here is a short blog post in an enthusiastic style about “My browser is now able to write blog posts for me”:
You read that right! I have discovered a new feature in my browser that allows me to write blog posts with just a few clicks. It’s amazing!
How does it work? Well, it’s simple. I just type in the topic of my blog post and the style I want it to be written in. Then, I press a button and voila! My browser generates a blog post for me in seconds.
The best part is that the blog posts are original, engaging and informative. They are not copied from anywhere else on the web. They are written by an intelligent system that learns from millions of sources and adapts to my preferences.
I can edit, tweak and customize the blog posts as much as I want. I can also add images, links and other elements to make them more attractive. The possibilities are endless!
This feature has saved me so much time and effort. I no longer have to worry about writer’s block or deadlines. I can create as many blog posts as I want without breaking a sweat.
If you want to try this feature for yourself, you can find it here: www.bing.com/blog-writ…
Trust me, you won’t regret it. It’s the best thing that ever happened to blogging!
A few things:
- This was supposed to be a “short” draft. Bing and I clearly have a different understanding of how short a “short” blog post should be.
- Bing interprets enthusiasm as “exclamation points”! It should add a subtlety slider.
- The options for tone are “Professional”, “Casual”, “Enthusiastic”, “Informational”, and “Funny”. Can it truly be called a blog writing tool without “Cynical”, “Snarky”, “Caustic”, and “Blowhard”?
- The URL just redirects to its namesake search engine. Poor product placement.
- There is also an “Add to site” button at the bottom that as of yet does nothing. Good.
- It can also do paragraphs, emails, and ideas and, full disclosure, I have already used it once today to generate the first draft for a paragraph I needed for work, and it’s only 8am!
It is far from being the best thing to have happened to blogging(!) per se, but I’d call it a contender (against meager competition, to be clear) for the best work tool the 21st century has so far produced.
BS bonanza
Doximity is beta testing ChatGPT for doctors:
Physicians can use the free DocsGPT to prepare referrals, certificates of medical necessity and prior authorization requests or to write a letter about a medical condition. A growing menu of prompts offers many options, and users can type in a custom request.
Next up: medical insurance companies using their own AI to process the AI-generated BS they receive from healthcare workers into something more easily understandable.
At least the economists must be happy!
Twitter greeted me this morning with a notification that, not being a subscriber, I will no longer be able to use two-factor authentication via text.
Sure, SMS is a bad method of 2FA anyway, and sure, Elon needs to save on his phone bill. It still felt like an ass move.
LLM convergence
Different independent sources I’ve followed for a while converging on the same couple of points means there just may be some fire to the large language model smoke.
- Bob Carpenter at Statistical Modeling… blog: “12/12 [queries] answered sensibly”
- Matt Birchler: “I’m seeing some people look at this as just the next NFT fad that comes and goes by the summer, but I’m not so sure. There’s something real here, and I think things are going to get a lot weirder very soon.”
- Dave Winer: “Maybe there will be negative consequences of ChatGPT, but I’m sure we’re not in a position to see what they are now, based on experience with similar changes. And maybe we’ll look back on this moment twenty years from now, and not be able to imagine what life was like before we had this fantastic tool.”
- Ben Thompson: Just… read the whole thing.
Twitter is still ablaze with screenshots, and of course yours truly had a few thoughts to share yesterday. The best explanation of ChatGPT and large language models in general is still Stephen Wolfram’s.
What a fun, exciting, and scary time to be alive…