
Assuming you have Ollama installed, you can test this with `ollama run yi:34b`.
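If you don't have Ollama yet, the whole thing is two commands on Linux (the install one-liner is the one from Ollama's own docs):

  curl -fsSL https://ollama.com/install.sh | sh
  ollama run yi:34b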



`ollama run mixtral:8x7b-instruct-v0.1-q3_K_L` works fast on my 3090 locally

What didn’t work? I run Ollama on my M2 Pro and all models work.

I'm using:

  ollama run dolphin-mixtral:8x7b-v2.5-q3_K_S

27b is working fine for me, hosted on ollama w/ continue.dev in VSCode.
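For anyone setting that up: a minimal sketch of the Continue side, assuming the ~/.continue/config.json format (the model name here is just an example, swap in whatever 27b tag you pulled):

  {
    "models": [
      { "title": "Local 27B via Ollama", "provider": "ollama", "model": "gemma2:27b" }
    ]
  }

Continue then talks to Ollama's default endpoint on localhost:11434.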

Did you ever figure out why? I've been using Ollama on my M2 Air for quite a few weeks now and have never had that issue. If it was your first time running it, it should have shown the output of the model downloading.

Thanks! I will test that out on my laptop.

As long as <x-eudora-setting:319=32> works…

This does not work; that's the issue. Try running it.

Please post an example showing that this works. I'll run the example with OS-mitigations disabled to confirm.

Oh god, thanks so much for this. I've literally just been testing on prod and praying that it works.

Yeah, I tried that. I can't remember for sure whether I got it to work for my program, but I think not.

Thanks! It works if I start an xorg session, at least.

Works on my machine. Alternative link: https://github.com/duxiaodan/intrinsic-lora

I think you screwed up your test; I have not managed to get it to work with the LD_PRELOAD workaround installed.
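For anyone unfamiliar, the LD_PRELOAD mechanism itself is easy to sanity-check in isolation. A quick sketch (not the actual workaround being discussed; the shim and target binary are placeholders):

  # tiny shim that overrides libc's rand(); every call now returns 4
  printf 'int rand(void) { return 4; }\n' > shim.c
  gcc -shared -fPIC -o shim.so shim.c
  LD_PRELOAD=$PWD/shim.so ./your_program

If the override doesn't take effect, the target may be statically linked or resolving the symbol internally, or it's a setuid binary, which disables preloading.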

I entered my OpenAI key and installed it properly. I don't use X.

It works on El Cap too, but it's another config file. I'll dig it up and post it shortly.

It should follow 302s, but some are not working. It's on my bug list :). Thanks for the comment.
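A quick way to see whether a given redirect chain resolves (URL here is hypothetical; -L makes curl follow the 302s, -w prints the final status and URL):

  curl -s -o /dev/null -w '%{http_code} %{url_effective}\n' -L https://example.org/short-link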

Hm, I wasn't able to replicate it. I'm using a Kali Linux VM. I'll try some other OSes and see if it works there.

Was anyone else successful with the PoC code?


Try running 6.32.4 on that unit.
