Compiled Meta-Llama-3-8B-Instruct using optimum-neuron
```
optimum-cli export neuron --model NousResearch/Meta-Llama-3-8B-Instruct --batch_size 1 --sequence_length 1024 --num_cores 2 --auto_cast_type fp16 ./models/NousResearch/Meta-Llama-3-8B-Instruct
```
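
The compiled artifacts can then be loaded for inference with optimum-neuron on a Neuron-enabled instance (e.g. inf2). The snippet below is a minimal sketch, not part of this repository: the tokenizer source, prompt, and generation parameters are assumptions for illustration.

```python
from transformers import AutoTokenizer
from optimum.neuron import NeuronModelForCausalLM

# Path matches the output directory of the export command above (assumption: run from the same working directory)
compiled_path = "./models/NousResearch/Meta-Llama-3-8B-Instruct"

# Tokenizer is pulled from the original checkpoint on the Hub
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Meta-Llama-3-8B-Instruct")
# Loads the precompiled Neuron graph (batch_size 1, sequence_length 1024, fp16, 2 cores, as exported)
model = NeuronModelForCausalLM.from_pretrained(compiled_path)

# Build a chat-style prompt and generate
messages = [{"role": "user", "content": "What is AWS Inferentia?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```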