April 10, 2024

4 Evolving Areas of the LLM Tech Stack

As trusted AI development consultants, we're always following the latest trends and tools accelerating the adoption of LLMs and AI within software applications. The four areas where we see the most rapid evolution of new tools and technologies are:

LLM Monitoring and Evaluations

The wide array of possible responses from LLMs, along with their prompt brittleness, makes it important to systematically monitor LLM outputs in test and production, and to run simulations before making changes. In some cases, real-time evaluations and guardrails can block harmful responses to the user or prevent attacks against an LLM-based system.
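As a minimal sketch of what a real-time output guardrail can look like, the snippet below checks a model response against blocked patterns before it reaches the user. The patterns, function name, and refusal message are illustrative assumptions, not a production rule set.

```python
import re

# Hypothetical block list: PII-like strings and prompt-injection phrases.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                          # SSN-like numbers
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
]

def guard_response(response):
    """Return (allowed, text); blocked responses become a safe refusal."""
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(response):
            return False, "Sorry, I can't share that."
    return True, response
```

In practice such checks run on both the incoming prompt and the outgoing response, and production systems typically layer classifier-based checks on top of simple pattern matching.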

Vector Databases & Information Retrieval Mechanisms for AI

Vector databases and semantic search took the information retrieval world by storm at the end of 2022, promising to provide private memory for AI applications. Hybrid semantic/lexical search approaches have quickly emerged, and large incumbents have stepped into the space to compete with new entrants.
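The core idea behind hybrid search is to blend a semantic (vector similarity) score with a lexical (keyword overlap) score. The toy scorer below assumes embeddings are precomputed and uses a simple term-overlap stand-in for a real lexical ranker like BM25; the weighting and function names are illustrative.

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def lexical_overlap(query, doc):
    """Toy lexical score: fraction of query terms present in the document."""
    q_terms = query.lower().split()
    d_terms = set(doc.lower().split())
    hits = sum(1 for t in q_terms if t in d_terms)
    return hits / len(q_terms) if q_terms else 0.0

def hybrid_score(query, doc, q_vec, d_vec, alpha=0.5):
    """Weighted blend of semantic and lexical relevance."""
    return alpha * cosine(q_vec, d_vec) + (1 - alpha) * lexical_overlap(query, doc)
```

Production systems usually fetch candidates from both a vector index and a keyword index, then fuse the rankings (for example with reciprocal rank fusion) rather than scoring every document directly.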

LLM Developer Toolkits

LLM toolkits have served to accelerate experimentation and establish technical design patterns. Even if a team prefers in-house code over open-source toolkits, all AI developers should be familiar with the concepts and patterns in these toolkits.
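Two of the patterns these toolkits popularized, prompt templates and chains, are simple enough to sketch in a few lines. Here `fake_llm` stands in for a real model call and is purely illustrative:

```python
def prompt_template(template):
    """Return a function that fills named slots in a prompt string."""
    def fill(**kwargs):
        return template.format(**kwargs)
    return fill

def chain(*steps):
    """Compose steps so each one's output feeds the next."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

# Stand-in for an actual LLM API call.
fake_llm = lambda prompt: f"[model answer to: {prompt}]"

summarize = chain(
    lambda text: prompt_template("Summarize in one line: {doc}")(doc=text),
    fake_llm,
)
```

Knowing these patterns makes it much easier to read toolkit code (or to decide what a leaner in-house equivalent should look like).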

Model Creation and Ops Toolkits

Once the province of a handful of large tech companies building foundational LLMs, model training is becoming more accessible. As fine-tuning becomes more common, these training and ops toolkits are critical for controlling training costs, optimizing custom model performance, and managing the rollout of new models.
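One common rollout technique is a percentage-based canary: deterministically route a small slice of users to the new model and expand as confidence grows. The routing rule and version names below are assumptions for illustration.

```python
import hashlib

def pick_model(user_id, canary_percent, stable="model-v1", canary="model-v2"):
    """Hash the user ID into a 0-99 bucket; low buckets get the canary model."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return canary if bucket < canary_percent else stable
```

Hashing on a stable user ID keeps each user's experience consistent across requests, which matters when comparing evaluation metrics between model versions.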

Key Players in Each Segment
