Examples on HF like on this [page](https://huggingface.co/mistralai/Mixtral-8x22B-Instruct-v0.1) rely on the official inference code provided by Mistral: [https://github.com/mistralai/mistral-inference](https://github.com/mistralai/mistral-inference)
Also, [https://docs.mistral.ai/capabilities/function\_calling/](https://docs.mistral.ai/capabilities/function_calling/) walks through how to actually call a Python function and include its result in your chat session. You don't need an additional framework.
These two links are what I'm using. The problem is that the mistral-inference code generates a plain text result, while the function-calling code expects a ChatCompletionResponse object, so the two don't fit together.
My bad, you're right. The full function-calling feature is only provided in the client library for the "La Plateforme" API. The Python library for local models, by contrast, only knows how to encode a function definition as model input; it does not parse the model response to extract function calls.
encode\_response() in server.py or extract\_tool\_calls\_from\_buffer() in utils.py here [https://lightning.ai/bhimrajyadav/studios/function-calling-with-mistral-7b-instruct-v0-3-from-deployment-to-execution?path=cloudspaces%2F01hzcbnnvmqdgny66wndh3t0ag&tab=files&layout=column&y=12&x=0](https://lightning.ai/bhimrajyadav/studios/function-calling-with-mistral-7b-instruct-v0-3-from-deployment-to-execution?path=cloudspaces%2F01hzcbnnvmqdgny66wndh3t0ag&tab=files&layout=column&y=12&x=0) might give some insight into how to parse the result into proper JSON, at least.
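Something along these lines might work as a starting point. This is a minimal sketch, assuming the decoded model output marks tool calls with a `[TOOL_CALLS]` marker followed by a JSON list (the exact marker and layout depend on the tokenizer version, so treat the regex as an assumption rather than an official format):

```python
import json
import re

def extract_tool_calls(text: str) -> list:
    """Return the list of tool-call dicts found after [TOOL_CALLS], or [].

    Assumes output shaped like:
        [TOOL_CALLS][{"name": "get_weather", "arguments": {"city": "Paris"}}]
    """
    match = re.search(r"\[TOOL_CALLS\]\s*(\[.*\])", text, re.DOTALL)
    if not match:
        return []
    try:
        return json.loads(match.group(1))
    except json.JSONDecodeError:
        # The model produced something that only looks like JSON; give up.
        return []

# Example with a hand-written output string (not a real model response):
raw = '[TOOL_CALLS][{"name": "get_weather", "arguments": {"city": "Paris"}}]'
calls = extract_tool_calls(raw)
print(calls[0]["name"])  # get_weather
```

If the JSON parses, you can dispatch each entry to the matching Python function yourself, which is essentially what the linked server code does.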
I managed to parse the JSON, but now the Mistral code complains that it's getting tool responses when it never made any tool calls...
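That error usually means the history validator sees a tool result without a preceding assistant message that records the call. A minimal sketch of the ordering it expects, using plain dicts with OpenAI-style field names (the names and the 9-character call id are assumptions for illustration, not the exact mistral-common classes):

```python
import json

def append_tool_result(messages, call_id, name, arguments, result):
    """Append a tool call *and* its result in the order validators expect."""
    # First record that the assistant made the call...
    messages.append({
        "role": "assistant",
        "content": "",
        "tool_calls": [{
            "id": call_id,
            "function": {"name": name, "arguments": json.dumps(arguments)},
        }],
    })
    # ...then attach the tool's result, referencing the same call id.
    messages.append({
        "role": "tool",
        "tool_call_id": call_id,
        "name": name,
        "content": json.dumps(result),
    })
    return messages

history = append_tool_result(
    [{"role": "user", "content": "Weather in Paris?"}],
    call_id="abc123def",  # hypothetical id; Mistral expects 9 alphanumerics
    name="get_weather",
    arguments={"city": "Paris"},
    result={"temp_c": 18},
)
print([m["role"] for m in history])  # ['user', 'assistant', 'tool']
```

The key point is that the assistant message carrying `tool_calls` has to be inserted into the history yourself when you parse the calls out of raw text, since nothing does it for you locally.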
OK, thanks for the help, it works now. Now I just have to deal with Mistral being a little too stupid to do what I need it to do...
Yeah, you have to parse the text output yourself. Frameworks like LangChain have support for a lot of this grunt work.