
I have this hooked up experimentally to my universal Dungeon Master simulator DungeonGod and it seems to work quite well.

I had been using Together AI Mixtral (which is serving the Hermes Mixtrals) and it is pretty snappy, but nothing close to Groq. I think the next closest I've tested is Perplexity Labs Mixtral.

A key blocker in just hanging out a shingle for an open source AI project is the fear that anything that might scale will bankrupt you (or just go offline if you get any significant traction). I think we're nearing the phase where we could potentially just turn these things "on" and eat the reasonable inference fees to see what people engage with - with a pretty decently cool free tier available.

I'd add that the simulator makes multiple calls to the API for each response, to do analysis and function selection in the underlying Python game engine; Groq makes that much less of a problem since it's close to instant, whereas it adds a pretty significant pause in the OpenAI version. Also, since the simulator runs on Discord with multiple users, I've had problems in the past with 'user response storms' where the AI couldn't keep up - also less of a problem with Groq.
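
To make that concrete, here's a rough sketch of the multi-call pattern, assuming an OpenAI-compatible client pointed at Groq's endpoint; the model name, helper functions, and intent categories are placeholders, not the actual DungeonGod code:

    # Rough sketch only: one player message triggers several fast LLM calls
    # (analysis, then function selection) before the narrative reply.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.groq.com/openai/v1",  # Groq exposes an OpenAI-compatible API
        api_key="YOUR_GROQ_API_KEY",
    )

    def llm(prompt: str) -> str:
        resp = client.chat.completions.create(
            model="mixtral-8x7b-32768",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def handle_player_message(message: str) -> str:
        # Call 1: analyze the player's intent.
        intent = llm(f"Classify the player's intent (attack/talk/move/other): {message}")
        # Call 2: pick which game-engine function should handle it.
        action = llm(f"Given intent '{intent}', name one engine function to call for: {message}")
        # Call 3: generate the in-character DM response.
        return llm(f"As the Dungeon Master, narrate the result of '{action}' for: {message}")

With a near-instant backend, three round trips per message are barely noticeable; with a slower one they stack into the pause described above.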




You can try AI Dungeon. It is somewhat close and uses the GPT-3 API.

AI Dungeon is a pretty great use case. I'm not sure if it'll survive this pricing, though.

Seems like a cool testbed for game AI. There's even a NullBot included from my quick research :)

For the record, there are at least five community-based clones of the original AI Dungeon (largely because the guy making the original had little to no idea what he was doing and was just haphazardly stringing bits of pre-made Python together), nearly all of which can either be run locally or trivially spun up on some free Colab workspace. The catch is they're all GPT-2, as was AI Dungeon originally. The step up to 3 is dramatic, and unfortunately out of reach for the ordinary user for now.

Agreed. I've been playing AI Dungeon with their GPT-3 model, and it really does feel like there's a scatterbrained but human DM on the other side.

Was about to ask - do you think this could be used as one of the OpenAI environments?

Note that AI Dungeon uses GPT-3 under the covers (though by default it's apparently a weaker "Griffin" variant; you can pay to get the stronger "Dragon" model).

If you don't have access to the OpenAI private beta, you can use AI Dungeon to play around with GPT-3 by starting a custom scenario and pasting in whatever prompts you want; for some ideas see Gwern's excellent post: https://www.gwern.net/GPT-3#the-database-prompt.

I'm particularly impressed with the "database prompt" experiment, which seems to suggest that GPT-3 has some level of self-referential reasoning, as you can tell it what it does and does not know in the context of a session, and it will "act out" an exchange based on that epistemology.


Kind of. I've been using it, and it gets stuff wrong (hallucinations) or it just doesn't know about things (e.g. the new additions to GameLift).

Granted, GPT is what I use to do this now. But I'm also ready to see what a focused, tailored AI would be able to do.

P.S. if any Amazon engineers see this and there is indeed a focused AI in the works, I would love to be a pre-alpha/alpha tester. I am willing to provide useful kind feedback.


Amazing open source effort. My immediate thought is building a generic AI player that can train itself and play most of the games.

What are your opinions?


AI Dungeon uses it to excellent effect, but with extensive fine tuning and tweaks.

Author of the game here. I don't mention it in the article or the video, but AI Dungeon is exactly what I didn't set out to do. GPT-2/3-based games are cool, but they have no ground truth.

Working on a general game-playing artificial intelligence. Nothing grand, I just have a few ideas I'd like to test in real life. It can theoretically play any game, but it's interesting to see whether it's actually feasible, since it's turning out to be quite a resource hog.

I think the difficulty will be teaching tricks to the AI, or making it output actions in the right format.

Pygmalion + Tavern/Kobold can run a very convincing chatbot with personality on about 10 GiB of VRAM, and even better with more. The less VRAM you can dedicate, the smaller the model you'll have to run.

For about a thousand dollars you can get an NVIDIA A4000 with 16GiB of VRAM. The GPU isn't particularly fast, but VRAM seems much more important with these models anyway. Gaming GPUs (which perform worse on ML tasks but are still equipped with plenty of VRAM) cost more but are already in the hands of high-end home desktop users.
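
For anyone wanting to try it, here's a rough sketch of fitting a Pygmalion-class model into that kind of VRAM budget with 8-bit loading via transformers and bitsandbytes; the checkpoint name and memory figures are assumptions, and Tavern/Kobold wrap this sort of thing up for you anyway:

    # Rough sketch, not the Tavern/Kobold internals: load a ~6B chatbot model
    # in 8-bit so it fits in roughly 10 GiB of VRAM (assumes transformers,
    # accelerate and bitsandbytes are installed and a CUDA GPU is available).
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "PygmalionAI/pygmalion-6b"  # example checkpoint

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",   # place layers on the available GPU(s)
        load_in_8bit=True,   # 8-bit weights roughly halve the memory of fp16
    )

    prompt = "Character: a sarcastic innkeeper.\nPlayer: Any rooms free tonight?\nCharacter:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=80, do_sample=True, top_p=0.9)
    print(tokenizer.decode(out[0], skip_special_tokens=True))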

The hardware is ready and available for those wanting to get started. All you need to do is find a way to connect the output of the AI to some kind of smart home system and interact with cron jobs. A few weekends with the Home Assistant API may be enough to get most things done already.
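
As a sketch of how little glue that takes - this assumes a local Home Assistant instance with a long-lived access token, and the "turn_on:light.living_room" action format is something I made up, not anything a particular model emits:

    # Minimal sketch: forward an "action" string emitted by the model to
    # Home Assistant's REST service API. URL, token and entity ids are placeholders.
    import requests

    HASS_URL = "http://homeassistant.local:8123"   # assumed local instance
    TOKEN = "LONG_LIVED_ACCESS_TOKEN"              # created under the HA user profile

    def call_service(domain: str, service: str, entity_id: str) -> None:
        resp = requests.post(
            f"{HASS_URL}/api/services/{domain}/{service}",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"entity_id": entity_id},
            timeout=10,
        )
        resp.raise_for_status()

    # e.g. prompt the model to answer with lines like "turn_on:light.living_room"
    def dispatch(model_output: str) -> None:
        service, entity = model_output.strip().split(":", 1)
        domain = entity.split(".", 1)[0]           # 'light', 'switch', ...
        call_service(domain, service, entity)

    dispatch("turn_on:light.living_room")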

The only thing I haven't seen open source AIs do is look for information online, like Bing Chat does.


Thanks for the lead. I looked into Perplexity.ai and I am really enjoying the experience so far.

Jared, appreciate the feedback! Any chance you could introduce me to some people on the OpenAI team? I think this would be a great fit and I'm very interested in their thoughts.

No, not directly. DeepMind Lab is a 3D environment that can be highly customized -- looks like it's built on an old Quake engine. Their pitch seems to include a lot of real-world task simulation. OpenAI Universe is made to sandbox and emulate existing PC software being used with mouse and keyboard input.

At least, that's my non-expert understanding.


Open Ai and similar

Yes, an Open AI

Not necessarily. Right now, AIs like this take lots of compute and resources to train. But as we've seen with Stable Diffusion, it's likely that in the coming years we will scale them down and create more open-source ones, so that indie devs can train them and players can run them on their own gaming GPUs.
