NOTE: It’s been a hectic time the last couple of weeks, so I haven’t been able to keep up with much of the news at the moment. This will be another light week.
Introducing Perplexity Deep Research
I know this was already available with ChatGPT Pro and with Gemini, but I’ve been using Perplexity for quite a while, so I’m looking forward to trying this out inside of Perplexity. It feels like this should be a really good use case for LLMs.
Run LLMs on macOS using llm-mlx and Apple’s MLX framework
By Simon Willison
I still run most of my LLM (or any AI model) tests on an M1 MacBook Pro. I need to look into using MLX more. It sounds like inference is getting really good with that framework. I know CUDA on Nvidia cards will still outperform it in most, if not all, cases, but I’d like to see how well MLX works on my machine. I also keep meaning to try out Simon’s llm CLI. I usually use Open WebUI and Ollama.
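If you want to kick the tires on MLX directly, separate from the llm plugin the article covers, the mlx-lm Python package exposes a small API for local inference. Here’s a minimal sketch, assuming mlx-lm is installed (`pip install mlx-lm`); the model name is just one of the community-quantized checkpoints on Hugging Face, so swap in whatever fits your machine’s memory:

```python
# Minimal local inference with Apple's MLX via the mlx-lm package.
# Assumes: pip install mlx-lm, and an Apple silicon Mac.
from mlx_lm import load, generate

# Downloads (on first run) and loads a 4-bit quantized community checkpoint.
# This particular model is an arbitrary choice for illustration.
model, tokenizer = load("mlx-community/Llama-3.2-3B-Instruct-4bit")

prompt = "In two sentences, what makes Apple silicon good for local LLMs?"

# Runs generation on the Mac's GPU via Metal; verbose=True streams tokens
# and prints a tokens-per-second summary, handy for comparing against Ollama.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```

With verbose output enabled, the tokens-per-second figure it reports makes for an easy apples-to-apples check against the same quantization level running under Ollama.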
WASM will replace containers
By Creston
I thought this was an interesting idea, but I don’t agree with it. I can see it being true in some contexts, but it’s not a universal replacement. There are certainly valid use cases, but I don’t think Docker and the like are in any danger anytime soon. There is some good discussion of this article on Hacker News here.