In my testing, the chat and instruct-tuned versions of MPT-30B are very close to GPT-3.5 for many tasks, but of course the team that made it got bought up immediately, and it's licensed only for non-commercial use. I'm hoping the open source community runs with the base model the same way they did with LLaMA.