
no chats / 2022-12-06

Avoiding ChatGPT; updates on economics (trends going in good directions), law (it’s messy), explainability (nothing good).
[Image: a robot with many visible wheels, rolling through a field, made of broccoli. Don't ask.]

If you're off twitter, congrats, you're a better person than me. (Though I did finally rejoin mastodon this week!) That means you missed the blizzard of experimentation with – and commentary on – ChatGPT, a new user interface for GPT-3.5 from OpenAI. I'd promised to do a roundup of it but... have decided against, because it's been mostly done to death. So instead, some other open-adjacent ML news from this week.

Economics

The economics of ML are still challenging for open(ish), which has tended to assume cheap, decentralized computation. Two announcements this week suggest the trend is moving in a good direction.

Licensing and law

Open(ish) legal issues continue to burble.

Transparency and explainability

  • Hallucination: I am tickled that hallucination is essentially an ML term of art, meaning "it's impossible for an AI to remember everything, so sometimes it just makes stuff up". One wonders if transparency/explainability techniques will get more focus now that so much of the critique of ChatGPT and Galactica has homed in on this problem.
  • Governance as a service: Model cards are now well-understood enough as a concept to be featured in an AWS re:Invent keynote. (A minimal sketch of the format follows this list.)
  • Explainability may be overrated: I've been intrigued (as I wrote last week) by the potential for transparency to be an appropriate focus for ML licensing (as a subset of ML governance). But this paper suggests that explainability, while nice in theory, isn't (yet?) that useful. It can be read, usefully, in conversation with this piece on how explainability can be just a dodge to avoid the real, harder questions of governance – which is why I would again stress that transparency is important, but can't be viewed as sufficient.
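
For the curious: the de facto model card format (popularized by Hugging Face, building on Mitchell et al.'s "Model Cards for Model Reporting" paper) is just a markdown file with a machine-readable YAML metadata block on top. Here's a minimal sketch in Python – every concrete value below (model name, license, metric) is invented for illustration:

```python
# Minimal model card sketch: YAML front matter plus markdown sections,
# following the convention popularized by Hugging Face. Every concrete
# value here (license, metrics, model name) is invented for illustration.
from pathlib import Path
from textwrap import dedent

card = dedent("""\
    ---
    license: apache-2.0
    language: en
    tags:
      - text-classification
    ---

    # toy-sentiment-classifier

    ## Intended use
    Sentiment classification of short English product reviews.
    Not for moderation or other high-stakes decisions.

    ## Training data
    Fine-tuned on a public reviews corpus (see the dataset card).

    ## Evaluation
    Accuracy 0.91 on a held-out test split (hypothetical number).

    ## Limitations
    Degrades on non-English text and sarcasm; treat outputs as
    predictions, not ground truth.
    """)

# On the Hugging Face Hub this file would be the repo's README.md.
Path("MODEL_CARD.md").write_text(card)
```

The split is the point: the YAML block is metadata that tooling can index (which is what makes "governance as a service" dashboards possible), while the markdown body carries the human-facing caveats.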

Misc.