How to convert instruct-pix2pix to GGUF format
I used the huggingface-cli download command to download the model in its entirety, and I also set up the llama.cpp tooling.
But when I run the conversion command, it fails because there is no config.json file. Is this file required? Why is there no config.json in the model directory?
This is the error log:
[root@localhost ~]# python /app/llama.cpp/convert_hf_to_gguf.py /root/instruct-pix2pix/ --outfile /root/instruct-pix2pix.gguf
INFO:hf-to-gguf:Loading model: instruct-pix2pix
Traceback (most recent call last):
  File "/app/llama.cpp/convert_hf_to_gguf.py", line 5112, in <module>
    main()
  File "/app/llama.cpp/convert_hf_to_gguf.py", line 5080, in main
    hparams = Model.load_hparams(dir_model)
  File "/app/llama.cpp/convert_hf_to_gguf.py", line 468, in load_hparams
    with open(dir_model / "config.json", "r", encoding="utf-8") as f:
FileNotFoundError: [Errno 2] No such file or directory: '/root/instruct-pix2pix/config.json'
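In case it helps with diagnosis, here is a small sketch I used to check which layout the downloaded directory actually has. The marker-file names reflect my understanding of the conventions (transformers-style repos keep a top-level config.json, which is what convert_hf_to_gguf.py opens; diffusers-style repos keep model_index.json at the top level instead, with per-component config.json files in subdirectories); detect_layout is just a hypothetical helper, not part of llama.cpp:

```python
from pathlib import Path


def detect_layout(model_dir: str) -> str:
    """Guess a downloaded checkpoint's layout from its marker files.

    Returns "transformers" if a top-level config.json exists (the file
    convert_hf_to_gguf.py expects), "diffusers" if only a top-level
    model_index.json exists, and "unknown" otherwise.
    """
    d = Path(model_dir)
    if (d / "config.json").is_file():
        return "transformers"
    if (d / "model_index.json").is_file():
        return "diffusers"
    return "unknown"
```

Running `detect_layout("/root/instruct-pix2pix")` on my directory did not report "transformers", which would be consistent with the FileNotFoundError above.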