5 steps to ensure startups successfully deploy LLMs
Lu Zhang, the founder and managing partner of Fusion Fund, is a renowned Silicon Valley–based investor and a serial entrepreneur in healthcare.
ChatGPT’s launch ushered in the age of large language models. In addition to OpenAI’s offerings, other LLMs include Google’s LaMDA family of models (which initially powered Bard), the BLOOM project (an open collaboration led by the BigScience workshop and Hugging Face), Meta’s LLaMA, and Anthropic’s Claude.
More will no doubt be created. In fact, an April 2023 Arize survey found that 53% of respondents planned to deploy LLMs within the next year or sooner. One approach is to create a “vertical” LLM: start with an existing model and carefully retrain it on knowledge specific to a particular domain. This tactic can work for life sciences, pharmaceuticals, insurance, finance, and other business sectors.
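In practice, building a vertical LLM usually means continued training (or fine-tuning) of an open base model on a domain corpus. Below is a minimal sketch using Hugging Face’s transformers library; the base model, corpus file name, and hyperparameters are illustrative placeholders, not prescriptions from this article.

from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

# Placeholder base model; a real vertical LLM would start from a much
# larger open model (e.g., a LLaMA-family checkpoint you have access to).
base_model = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical domain-specific corpus (e.g., insurance policy documents),
# one text example per line.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="vertical-llm",
        num_train_epochs=1,
        per_device_train_batch_size=1,
    ),
    train_dataset=tokenized,
    # mlm=False -> standard next-token (causal) language-modeling objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

The key design point is that all of the expensive general-purpose pretraining is inherited from the base model; only the comparatively small domain corpus is trained on, which is what makes the vertical approach feasible for startups.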
Deploying an LLM can provide a powerful competitive advantage — but only if it’s done well.
LLMs have already caused newsworthy problems, most notably their tendency to “hallucinate” incorrect information. Hallucination is a severe issue in its own right, but it can also distract leadership from equally problematic flaws in the processes that generate those outputs.
The challenges of training and deploying an LLM
LLMs are exciting, but developing and adopting them requires overcoming several feasibility hurdles. The first is their tremendous operating expense: the computational demand to train and run them is intense (they’re not called large language models for nothing).
First, the hardware to run the models on is costly. The H100 GPU from Nvidia, a popular choice for LLMs, has been selling on the secondary market for about $40,000 per chip. One source estimated it would take roughly 6,000 chips to train an LLM comparable to GPT-3.5, the model behind the original ChatGPT. That’s roughly $240 million on GPUs alone.
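The back-of-envelope math, using the article’s own figures (both the secondary-market price and the chip count are rough estimates from the cited source):

# GPU capital expense for a GPT-3.5-class training run
h100_price_usd = 40_000   # per chip, secondary market
chips_needed = 6_000      # one source's estimate
gpu_capex = h100_price_usd * chips_needed
print(f"GPU capital expense: ${gpu_capex:,}")  # -> $240,000,000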
Another significant expense is powering those chips. Merely training a model is estimated to require about 10 gigawatt-hours (GWh) of electricity, equivalent to 1,000 U.S. homes’ yearly electricity use. Once the model is trained, its electricity cost will vary but can get exorbitant. That source estimated that running ChatGPT consumes about 1 GWh a day, the combined daily energy use of 33,000 households.
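A quick sanity check of those equivalences, assuming a typical U.S. household uses roughly 10,600 kWh of electricity per year (a rough figure based on EIA averages; the article does not specify its assumption):

# Energy equivalences from the article's estimates
kwh_per_gwh = 1_000_000
home_annual_kwh = 10_600  # assumed typical U.S. household usage

training_gwh = 10         # estimated energy to train the model
homes_year = training_gwh * kwh_per_gwh / home_annual_kwh
print(f"Training ~ {homes_year:,.0f} home-years")  # ~943, i.e., ~1,000 homes

inference_gwh_day = 1     # estimated daily energy to run the model
homes_day = inference_gwh_day * kwh_per_gwh / (home_annual_kwh / 365)
print(f"Daily inference ~ {homes_day:,.0f} households/day")  # ~34,000, close to the cited 33,000

Both figures land within a few percent of the article’s round numbers, so the cited equivalences are internally consistent.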
Power consumption is also a potential user-experience pitfall when running LLMs on portable devices: heavy on-device use could drain a battery very quickly, a significant barrier to consumer adoption.