My current take on LLMs is that they are useful but over-blown. They are likely to be widely used badly, generating negative consequences through broken societal and commercial systems and through bad actors who leverage the inherent limitations of LLMs for immediate gain.
There is a small chance that LLMs may have some surprising emergent capabilities, especially if they can be used creatively and contextually by networks, communities, and individuals.
Sources that have influenced my thinking include:
- AI and Software Quality
- Ayyyyy Eyeeeee - Cory Doctorow draws on prior tech experience (criti-hype) and current LLM happenings to explore how LLMs will be used for enshittification, arguing that rather than posing an existential risk, LLMs are "a product of limited utility that has been shoehorned into high-stakes applications that it is unsuited to perform"
- llm-types - overview of different LLM types
- Weird world of LLMs - an interesting presentation giving different perspectives
- building-ai-applications-based-on-learning-research - Q&A with Khan Academy on their use of LLMs
- first-llm-api-experiments - figuring out whether and how basic LLM APIs can be used to "template" prompt engineering
- customising-llms - whether and how to customise an LLM using your own documents
- privateGPT - experiment running local LLM with document embeddings
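The prompt-templating idea from the API experiments can be sketched without any particular provider: a template with named slots is filled per request and the result is what would be sent to an LLM chat endpoint. This is a minimal sketch; the template text, slot names, and `build_prompt` helper are my own illustrative assumptions, not code from the linked experiments.

```python
from string import Template

# Hypothetical prompt template: summarise a document in a given style.
# A real experiment would send the filled prompt to an LLM API.
SUMMARY_TEMPLATE = Template(
    "You are a careful technical writer.\n"
    "Summarise the following document in a $style style, "
    "in at most $max_words words.\n\n"
    "Document:\n$document"
)

def build_prompt(document: str, style: str = "plain", max_words: int = 100) -> str:
    """Fill the template's slots; the result is the text sent to the model."""
    return SUMMARY_TEMPLATE.substitute(
        document=document, style=style, max_words=max_words
    )

prompt = build_prompt(
    "LLMs are useful but over-blown.", style="neutral", max_words=50
)
```

The point of templating is that prompt wording becomes data rather than scattered string concatenation, so variants can be versioned and compared across runs.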
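The customising-llms and privateGPT experiments both rest on the same retrieval idea: embed your documents, embed the query, and hand the closest match to the model as context. A toy sketch of that retrieval step, assuming a deliberately crude bag-of-words "embedding" in place of a real embedding model (which privateGPT and similar tools would actually use):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words counts.
    # A real system would use a trained embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def top_document(query: str, documents: list[str]) -> str:
    # Return the document most similar to the query;
    # this is what gets stuffed into the LLM prompt as context.
    q = embed(query)
    return max(documents, key=lambda d: cosine(q, embed(d)))

docs = [
    "recipe for sourdough bread",
    "running a local LLM with document embeddings",
    "history of the roman empire",
]
best = top_document("local LLM embeddings", docs)
# → "running a local LLM with document embeddings"
```

The design choice worth noting is that the model itself is unchanged: "customising" here means retrieval plus prompting, not fine-tuning.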