


Ollama, IMHO, is a great runtime for noobs and the lazy alike. They have now released an update that puts them on track for full multimodal support, letting them ship faster and better. Read about that release here:
Ollama’s new engine for multimodal models · Ollama Blog
And if you followed the most recent LXC guide to the OWUI/Ollama setup, you can just enter the container, type: update, then restart, and you will be good to go.
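If you want to sanity-check that the update actually took, here is a minimal sketch that asks the Ollama server for its version over the REST API. It assumes the default localhost:11434 bind; point it at your LXC's IP if the server lives elsewhere:

```python
import json
import urllib.request

# Assumption: Ollama is listening on its default bind of localhost:11434.
# Swap in your LXC's IP/hostname if you run it remotely.
OLLAMA_HOST = "http://localhost:11434"

# GET /api/version returns a small JSON payload like {"version": "..."}.
with urllib.request.urlopen(f"{OLLAMA_HOST}/api/version") as resp:
    version = json.load(resp)["version"]

print(f"Ollama server reports version {version}")
```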
As always, as a YouTuber… here is a video on this:
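If you want to poke at the new multimodal path yourself, here is a minimal sketch using the official ollama Python client (pip install ollama). The model name llava and the image path are placeholders; use whatever vision model you have actually pulled:

```python
import ollama

# Sketch only: "llava" stands in for any multimodal model you have pulled
# (e.g. `ollama pull llava`). Images are passed as local file paths and the
# client handles encoding them for the API.
response = ollama.chat(
    model="llava",
    messages=[
        {
            "role": "user",
            "content": "What is in this image?",
            "images": ["./photo.jpg"],  # placeholder path
        }
    ],
)

print(response["message"]["content"])
```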
Ollama, IMHO, as a totally non-scientific garage nerd, does a great job at the segment I think it is aiming for: creating an easy experience for end users. It makes quick work of the nit-picky stuff and is very helpful to the lazy among us, and to new users too. Is this training the new users to be lazy? I hope not, but I expect it is. Overall I like their distribution methods and think they are nailing it, except… for the controversy that detracts from the public perception.

I am not a person who is interested in LARPing as a lawyer, but there are a lot of other folks who are happy to, and never can an Ollama release happen without a stir of controversy over their use of the llama.cpp libraries and MIT license attribution. I certainly think the llama.cpp team does an amazing job and deserves credit, but theirs is a pretty different project: fast implementations and lots of releases, sometimes a few a day, which is not exactly friendly for the lazy or new users… I don't know what it would take to move past this, but I think it is worthwhile to try, in a manner that, dare I suggest… appeases the redditors. There is danger even in that, I suspect. You tell me what you think (in the video comments above).
A few choice articles.
Here I will keep an up-to-date list of the conjecture and speculation around that, for at least as long as I remember to update it. I will warn you, my ctx is around 4096; I have yet to implement memory for myself…
Ollama’s new engine for multimodal models | Hacker News
Ollama violating llama.cpp license for over a year : r/LocalLLaMA