To get started quickly, you can run Orca-2-13B on your own device with a single command. The command-line tool automatically downloads and installs the WasmEdge runtime, the model files, and the portable Wasm apps for inference.
Microsoft's Orca 2 series comes in 7B and 13B parameter versions, fine-tuned from the Llama 2 base models; Orca-2-13B is the larger of the two. Building upon the original 13B Orca model, it excels at reasoning, text summarization, math problem-solving, and comprehension tasks. It imitates, step by step, the reasoning process of more capable AI systems, aiming to enhance smaller models' performance on complex tasks.
In this article, we will cover:
- How to run Orca-2-13B on your own device
- How to create an OpenAI-compatible API service for Orca-2-13B
We will use the Rust + Wasm stack to develop and deploy applications for this model. There are no complex Python packages or C++ toolchains to install! See why we chose the Rust + Wasm tech stack.
Run the model on your own device
Step 1: Install WasmEdge via the following command line.
curl -sSf https://raw.githubusercontent.com/WasmEdge/WasmEdge/master/utils/install.sh | bash -s -- --plugins wasmedge_rustls wasi_nn-ggml
Step 2: Download the model GGUF file. It may take a long time, since the size of the model is several GBs.
curl -LO https://huggingface.co/second-state/Orca-2-13B-GGUF/resolve/main/Orca-2-13b-ggml-model-q4_0.gguf
Step 3: Download a cross-platform portable Wasm file for the chat app. The application allows you to chat with the model on the command line. The Rust source code for the app is here.
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-chat.wasm
That's it. You can chat with the model in the terminal by entering the following command.
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Orca-2-13b-ggml-model-q4_0.gguf llama-chat.wasm -p chatml -s 'You are Orca, an AI language model created by Microsoft. You are a cautious assistant. You carefully follow instructions. You are helpful and harmless and you follow ethical guidelines and promote positive behavior.'
The portable Wasm app automatically takes advantage of the hardware accelerators (e.g., GPUs) I have on the device.
On my Mac M1 32G memory device, it clocks in at about 9.15 tokens per second.
[USER]: What is an orca
[ASSISTANT]: An orca, also known as a killer whale, is a marine mammal belonging to the oceanic dolphin family, of which it is the largest member. Orcas are highly social and intelligent creatures known for their striking black-and-white coloring. They are found in oceans all around the world, from the Arctic and Antarctic regions to tropical seas.
[USER]:
Create an OpenAI-compatible API service
An OpenAI-compatible web API allows the model to work with a large ecosystem of LLM tools and agent frameworks, such as flows.network, LangChain, and LlamaIndex.
Download an API server app. It is also a cross-platform portable Wasm app that can run on many CPU and GPU devices.
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-api-server.wasm
Then, use the following command to start an API server for the model.
wasmedge --dir .:. --nn-preload default:GGML:AUTO:Orca-2-13b-ggml-model-q4_0.gguf llama-api-server.wasm -p chatml
From another terminal, you can interact with the API server using curl.
curl -X POST http://0.0.0.0:8080/v1/chat/completions -H 'accept:application/json' -H 'Content-Type: application/json' -d '{"messages":[{"role":"system", "content":"You are a helpful AI assistant"}, {"role":"user", "content":"What is the capital of France?"}], "model":"Orca-2-13B"}'
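You can also call the endpoint programmatically. Below is a minimal Python sketch (standard library only) that sends the same chat-completion request as the curl command above. It assumes the API server is running locally on port 8080 and that the model name `Orca-2-13B` matches the one you pass in the request; adjust both to your setup.

```python
# Minimal client for a local OpenAI-compatible chat endpoint.
# Assumes llama-api-server is listening on localhost:8080 (see the
# wasmedge command above); the model name is whatever you registered.
import json
import urllib.request

API_URL = "http://localhost:8080/v1/chat/completions"

def build_request(user_message: str) -> dict:
    """Build a chat-completion payload matching the curl example above."""
    return {
        "model": "Orca-2-13B",
        "messages": [
            {"role": "system", "content": "You are a helpful AI assistant"},
            {"role": "user", "content": user_message},
        ],
    }

def chat(user_message: str) -> str:
    """POST the payload and return the assistant's reply text."""
    body = json.dumps(build_request(user_message)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # OpenAI-style responses carry the reply in choices[0].message.content
    return data["choices"][0]["message"]["content"]
```

Because the API follows the OpenAI wire format, the same payload shape works with any OpenAI-compatible client library pointed at the local base URL.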
That’s all. WasmEdge is the easiest, fastest, and safest way to run LLM applications. Give it a try!
Talk to us!
Join the WasmEdge Discord to ask questions and share insights. Any questions about getting this model running? Please go to second-state/LlamaEdge to raise an issue, or book a demo with us to enjoy your own LLMs across devices!