
Casually Run Falcon 180B LLM on Apple M2 Ultra! FASTER than nVidia?

It has only been 24 hours since launch, and the newly released Falcon 180B (a 180-billion-parameter LLM) has already been modified to run inference on an Apple M2 Ultra chip with 192GB of RAM. That is absolutely insane, on top of other open source developers also finding ways to run the model, more slowly, on 32-core CPUs as well.
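The post doesn't say which tooling was used, but the usual route for running a model this size on Apple Silicon is llama.cpp with a heavily quantized GGUF checkpoint, which lets the model sit in the M2 Ultra's unified memory and run on its GPU via Metal. Here is a minimal sketch using the llama-cpp-python bindings; the model file name and parameters are illustrative assumptions, not details from the original post:

```python
# A minimal sketch, assuming a quantized GGUF build of Falcon 180B
# (roughly 4 bits per weight, ~100GB on disk) has already been downloaded,
# and llama-cpp-python was compiled with Metal support on Apple Silicon.
from llama_cpp import Llama

llm = Llama(
    model_path="./falcon-180b-chat.Q4_0.gguf",  # hypothetical local file name
    n_gpu_layers=-1,  # offload all layers to the GPU (Metal backend)
    n_ctx=2048,       # context window size
)

output = llm(
    "Explain why unified memory helps run very large LLMs:",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```

Quantization is what makes this possible at all: at roughly 4 bits per weight, 180 billion parameters come to around 90-100GB, which fits comfortably in the M2 Ultra's 192GB of unified memory, whereas the full-precision weights would not.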

What do you think this means for the future of LLMs?

Let us know in the comments!

Rent FAST nVidia GPUs on Vast AI: https://cloud.vast.ai/?ref_id=74601