Content tagged with "AI"

I'm excited and very proud to see my colleague and mentee Safa appear in this year's Royal Institution Christmas Lectures. In the video, Safa is assisting Prof. Mike Wooldridge (who has some excellent and sensible takes on AI, by the way) with some experiments.

In general, this is a well-put-together lecture that gives a lay summary of some of the techniques and acronyms you're likely to come across in conversations about AI. I'd recommend it to anyone interested in learning more about what these systems do, or to any relatives who might have brought up the subject of AI at the Christmas dinner table.

Read more...

As of today, I am deprecating/archiving Turbopilot, my experimental LLM runtime for code-assistant-style models. In this post I'm going to dive a little bit into why I built it, why I'm stopping work on it, and what you can do now.

If you just want a TL;DR of alternatives, read this bit.

Why did I build Turbopilot?

In April I got COVID over the Easter break and had to stay home for a bit. After the first couple of days I started to get restless. I needed a project to dive into while I was cooped up at home. It just so happened that people were starting to get excited about running large language models on their home computers after ggerganov published llama.cpp. Lots of people were experimenting with asking LLaMA to generate funny stories, but I wanted to do something more practical and useful to me.

Read more...

Gary’s article dropped on Friday and has been widely circulated and commented upon over the weekend.

It shows that LLMs struggle to generalise outside of their prompt: they know that Tom Cruise's mum is Mary Lee Pfeiffer but don't know that Mary Lee Pfeiffer's son is Tom Cruise, and there are many more examples. This is a known weakness of neural networks that I wrote about in my EACL 2021 paper and that has been documented as far back as the 90s. What's interesting is that it still holds today for these massive models with billions of parameters.
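
To make the failure mode concrete, here's a minimal sketch of how you might probe it yourself. It assumes the OpenAI Python client with an API key in your environment; the model name and the `ask` helper are placeholders rather than anything from Gary's article, and any other LLM API would do just as well.

```python
# Minimal sketch of probing the "reversal curse" described above.
# Assumes the OpenAI Python client (`pip install openai`) and an
# OPENAI_API_KEY set in the environment; swap in any LLM API you like.
from openai import OpenAI

client = OpenAI()

def ask(question: str) -> str:
    """Send a single-turn question to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Forward direction: usually answered correctly.
print(ask("Who is Tom Cruise's mother?"))

# Reversed direction: the same fact asked the other way round,
# which is where models tend to stumble.
print(ask("Who is Mary Lee Pfeiffer's son?"))
```

If the article's examples hold, it's the second call that goes astray.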

Read more...

Went to see Mission: Impossible - Dead Reckoning. It was ok as a cheesy action film. The AI storyline was a bit irksome, but there was a bit where everyone went "yeah, the biggest problem is gonna be who ends up controlling this AI and how they use it" and I was like "yea…"


An image of a robot DJ at a mixing desk generated by Stable Diffusion

Spotify recently joined the AI hype by introducing a new AI DJ to their app. I was initially deeply sceptical and cynical and started muttering about cancelling my subscription and just going back to using my local library and Plex. However, I thought I’d give it a go. So far I’ve found it to be… not terrible.

Read more...

Typesetting blocks form alphabet spaghetti a bit like a language model might spit out. Photo by Raphael Schaller on Unsplash

There is sooo much hype around LLMs at the moment. As an NLP practitioner of 10 years (I built Partridge back in 2013), I find it exhausting and quite annoying, and amongst the junior ranks there's a lot of despondency and dejection and a feeling of "what's the point? ClosedOpenAI have solved NLP".

Read more...

Today I read via Simon Willison's blog that someone has managed to get LLaMA running on a Raspberry Pi. That's pretty incredible progress, and it made me think of this excerpt from The Hitchhiker's Guide to the Galaxy:

“O Deep Thought computer,” he said, “the task we have designed you to perform is this. We want you to tell us….” he paused, “The Answer.”

“The Answer?” said Deep Thought. “The Answer to what?”

Read more...