The `files` parameter should include a dictionary of files for the safetensors model, mapping each file name to its SHA256 digest. Use `/api/blobs/:digest` to push each of the files to the server before calling this API. Files will remain in the cache until the Ollama server is restarted.

```shell
curl --location --request POST 'http://localhost:11434/api/create' \
--header 'Content-Type: application/json' \
--data-raw '{
  "model": "bert-base-chinese",
  "files": {
    "config.json": "a1b2c3d4e5f6",
    "generation_config.json": "b2c3d4e5f6g7",
    "special_tokens_map.json": "c3d4e5f6g7h8",
    "tokenizer.json": "d4e5f6g7h8i9",
    "tokenizer_config.json": "e5f6g7h8i9j0",
    "model.safetensors": "f6g7h8i9j0k1"
  }
}'
```
{"status":"converting model"}
{"status":"creating new layer sha256:05ca5b813af4a53d2c2922933936e398958855c44ee534858fcfd830940618b6"}
{"status":"using autodetected template llama3-instruct"}
{"status":"using existing layer sha256:56bb8bd477a519ffa694fc449c2413c6f0e1d3b1c88fa7e3c9d88d3ae49d4dcb"}
{"status":"writing manifest"}
{"status":"success"}