Permalink

Bookmarked https://reclaim-the-stack.com/

Another self-hosted PaaS-type platform, similar to Dokku and Coolify.

An interesting distinction is that RtS seems to specifically use Kubernetes for orchestration, whereas Dokku and Coolify tend to prefer Docker. Since Kubernetes is on my list to dive into a little bit more, this could be a good excuse to explore it.

I’ve started a digital garden page about it for taking notes.


Permalink

The title and framing of this talk are weird and it's bugging me

The question could be paraphrased as "why would we need to efficiently store and retrieve data in a deterministic way when we have GenAI?" This is like asking "why do we need cars when we have speedboats?" or "why do we need butter knives now that we've invented the chainsaw?"

The actual subject matter is "PostgreSQL with a couple of plugins can do pretty good nearest neighbour search". I've long been a big fan of Postgres. You probably don't need a separate vector database engine; you can just use Postgres for everything.
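As a concrete sketch (my own, and assuming the plugin in question is pgvector, which the talk may or may not have named): nearest neighbour search becomes plain SQL. Here it's driven from Python with psycopg; the table, vector size, and data are invented for illustration.

```python
import psycopg  # psycopg 3; assumes a Postgres server with pgvector available

# Connection details are placeholders - adjust for your setup.
with psycopg.connect("dbname=demo user=postgres") as conn:
    conn.execute("CREATE EXTENSION IF NOT EXISTS vector")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS items (id bigserial PRIMARY KEY, embedding vector(3))"
    )
    conn.execute("INSERT INTO items (embedding) VALUES ('[1,2,3]'), ('[2,3,4]')")

    # '<->' is pgvector's Euclidean distance operator: closest vectors first.
    rows = conn.execute(
        "SELECT id, embedding <-> %s::vector AS dist FROM items ORDER BY dist LIMIT 5",
        ("[1,2,3]",),
    ).fetchall()
    print(rows)
```

For anything beyond toy sizes you'd also add an index (pgvector supports HNSW and IVFFlat) so queries don't scan the whole table.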


Permalink

Bookmarked The Art of Finishing | ByteDrum by Tomas Stropus.

An incredibly relatable essay with a lot of sensible advice and suggestions for issues I struggle with. I think I'm getting better at shipping MVPs, but the hard bit is not getting distracted by shiny new ideas when I get stuck on something else. This philosophy is in direct opposition to the SOFA principle.


Permalink

Bookmarked Elegant and powerful new result that seriously undermines large language models

Gary’s article dropped on Friday and has been widely circulated and commented upon over the weekend.

It shows that LLMs struggle to generalise outside of their training data (they know that Tom Cruise's mum is Mary Lee Pfeiffer but don't know that Mary Lee Pfeiffer's son is Tom Cruise - and there are many more examples). This is a known weakness of neural networks that I wrote about in my EACL 2021 paper and that has been documented as far back as the 90s. What's interesting is that it still holds today for these massive models with billions of parameters.

For me, the message here isn't "LLMs aren't intelligent, so let's write them off as a technology". Rather, it's more evidence that they're a powerful yet limited tool in our arsenal, and not a silver bullet. It vindicates approaches that combine technologies to get to the desired output (for example, pairing an LLM with a graph database could help with the mum/son thing).
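To make the graph point concrete, here's a toy sketch (mine, not from Gary's article, and not a real graph database): once the fact is stored as an explicit relation, querying it in either direction is trivial - exactly the step the LLMs fail at.

```python
# A toy fact store: one directed edge, queryable in both directions.
facts = {("Mary Lee Pfeiffer", "mother_of", "Tom Cruise")}

def mother_of(child):
    """Forward query: who is this person's mother?"""
    return next((s for s, r, o in facts if r == "mother_of" and o == child), None)

def child_of(mother):
    """Inverse query: who is this person's child? Free, given the same edge."""
    return next((o for s, r, o in facts if r == "mother_of" and s == mother), None)

print(mother_of("Tom Cruise"))        # -> Mary Lee Pfeiffer
print(child_of("Mary Lee Pfeiffer"))  # -> Tom Cruise
```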

For me this is also a stake in the heart for the whole "there's the spark of general intelligence there" argument. I find these kinds of probing/diagnostic tests on models really interesting too.


Permalink

Bookmarked https://herbertlui.net/the-squeeze/

Herbert writes about companies squeezing customers once a good deal has lured enough of them in (aka enshittification).

You can bet that the better a deal sounds, the more likely it’s temporary. The company is going to squeeze at some point

As an end customer, make hay while the sun shines and look out for good deals, but also take heed of anything that could prevent you from leaving when something goes south. That could be as explicit as an in-contract price hike, or simply a lack of interoperability that would make you think twice about leaving a service after all the time and effort you've put into organising your siloed information.


Permalink

Bookmarked My Reading Philosophy in 17 Guidelines – Tracy Durnell's Mind Garden

I love this post by Tracy. I think that it’s easy to fall into the trap of “I’ve started so I’ll finish” as a badge of honour when it comes to books, even when I’m not enjoying them any more.

I also echo the sentiment about knowing what you like. Whilst I enjoy a good pop-sci non-fiction book, biographies trigger my "air raid siren".

I also like having multiple non-fiction books on the go whilst I power through one good story.

Reading what you want, when you want is also a great directive. I find that if I'm feeling industrious, I might want to sit and make notes on a non-fiction book, but sometimes if I'm tired or anxious (e.g. Sunday scaries), a good story is great escapism.


Permalink

Bookmarked Show HN: I made some ambient music generators that run in your browser | Hacker News

I ended up subscribing to this service - I quite often listen to jazz and electronic music with no lyrics while I'm concentrating. This service gives me "infinite work music" that I can listen to while I code, etc.


Permalink

Bookmarked Giving a Shit as a Service - Allen Pike

I really like this concept. You should definitely give a shit about your customers, and about your employees and colleagues, if you want to build any kind of meaningful professional network.


Permalink

Bookmarked models/official/projects/token_dropping at master · tensorflow/models · GitHub

Token dropping aims to accelerate the pretraining of transformer models such as BERT without degrading their performance on downstream tasks.

A BERT model pretrained with token dropping is no different from one pretrained in the conventional way: a checkpoint pretrained with token dropping can be viewed and used as a normal BERT checkpoint, e.g. for fine-tuning.
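For intuition, here's a rough sketch of the core mechanism as I understand it (my own simplification in PyTorch, not the repo's actual TensorFlow code): score each token by how "hard" it is (the paper tracks a running loss statistic per token, if I'm reading it right), run the middle layers only on the hard tokens, then scatter the results back so the later layers - and the saved checkpoint - still cover every position.

```python
import torch

def drop_tokens(hidden_states, importance, keep_ratio=0.5):
    """Keep only the most 'important' tokens for the middle layers.

    hidden_states: (batch, seq_len, hidden) activations entering the middle layers
    importance:    (batch, seq_len) score per token (higher = harder = keep);
                   a stand-in for the paper's per-token loss statistic
    """
    batch, seq_len, hidden = hidden_states.shape
    k = max(1, int(seq_len * keep_ratio))
    # Indices of the k highest-scoring tokens, re-sorted into original order
    kept_idx = importance.topk(k, dim=1).indices.sort(dim=1).values  # (batch, k)
    gather_idx = kept_idx.unsqueeze(-1).expand(-1, -1, hidden)
    kept = hidden_states.gather(1, gather_idx)  # (batch, k, hidden)
    return kept, kept_idx

def restore_tokens(full_states, processed, kept_idx):
    """Scatter the processed subset back into the full sequence so the
    final layers (and the checkpoint format) see every position again."""
    hidden = full_states.size(-1)
    scatter_idx = kept_idx.unsqueeze(-1).expand(-1, -1, hidden)
    return full_states.scatter(1, scatter_idx, processed)

# Toy usage: 2 sequences of 8 tokens, hidden size 16.
x = torch.randn(2, 8, 16)
scores = torch.rand(2, 8)            # stand-in importance scores
kept, idx = drop_tokens(x, scores)   # middle layers run on ~half the tokens
x = restore_tokens(x, kept, idx)     # later layers see the full sequence
```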

