So let me see if I have this straight:
- OpenAI and Microsoft have a partnership, with Microsoft all-in on integrating artificial intelligence into its products and developing it further.
- Google/Alphabet has its own AI programs — some of them mind-blowing — which are bound to increase now that ChatGPT is out, with the potential to destroy the value of the one thing Google still does well: searching the internet.
- Facebook/Meta has only some embarrassing failures to show for its efforts, for now, but one can expect some rearrangement of priorities once the company’s shareholders see what being all-in on AI has done to $MSFT.
If artificial general intelligence is possible (for an explanation of how AI differs from AGI, I recommend this short interview with David Deutsch), odds are that it will emerge in this decade. Determining whether that is good or bad I will leave as an exercise for the reader.
P.S. While getting the links for this post, I came upon a WaPo article that came out today and devotes a single paragraph to the potential harms of AI:
Some AI ethicists fear that Big Tech’s rush to market could expose billions of people to potential harms — such as sharing inaccurate information, generating fake photos or giving students the ability to cheat on school tests — before trust and safety experts have been able to study the risks. Others in the field share OpenAI’s philosophy that releasing the tools to the public, often nominally in a “beta” phase after mitigating some predictable risks, is the only way to assess real world harms.
This is true, as things stand now. Wouldn’t it be nice if it stayed that way?