Key Concepts

Text Generation Models

Avian's text generation models have been trained to understand natural and formal language. These models produce text outputs in response to their inputs. The inputs to these models are referred to as "prompts". Designing a prompt is essentially how you "program" a model, usually by providing instructions or some examples of how to successfully complete a task. The models available through the Avian API can be used across a great variety of tasks including content or code generation, summarization, conversation, creative writing, and more. Read more in our introductory text generation guide and in our prompt engineering guide.
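
As an illustration of how a prompt "programs" a model, the sketch below sends an instruction-style prompt to a hypothetical chat completions endpoint. The endpoint URL, model name, and request fields are assumptions made for illustration; consult the API reference for the actual request format.

```python
import os
import requests

# Hypothetical endpoint and model name -- check the Avian API reference
# for the real URL, model identifiers, and request schema.
API_URL = "https://api.avian.example/v1/chat/completions"
API_KEY = os.environ["AVIAN_API_KEY"]

# The prompt "programs" the model: an instruction plus the text to act on.
payload = {
    "model": "avian-text-1",  # assumed model name for this sketch
    "messages": [
        {"role": "system", "content": "Summarize the user's text in one sentence."},
        {"role": "user", "content": "Large language models process text as tokens ..."},
    ],
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json())
```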

Assistants

Assistants in the Avian API refer to entities powered by large language models that are capable of performing tasks for users. These assistants operate based on the instructions embedded within the context window of the model. They are designed to understand and respond to user inputs, carrying out a wide range of text-based tasks. The capabilities of these assistants are defined by the underlying models and the specific implementation within the Avian API.
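
For example, an assistant's behavior can be shaped by placing standing instructions at the start of the context window, followed by the running conversation. The message structure below mirrors the hypothetical request format sketched above and is illustrative only.

```python
# Standing instructions define the assistant; the conversation history that
# follows is what the model sees in its context window on each request.
instructions = {
    "role": "system",
    "content": "You are a support assistant for a weather app. Answer briefly.",
}

conversation = [
    {"role": "user", "content": "Why is the radar map blank?"},
    {"role": "assistant", "content": "Radar tiles only load when location access is enabled."},
    {"role": "user", "content": "How do I enable it?"},
]

# Each request sends the instructions plus the accumulated history, so the
# assistant keeps its persona and the thread of the conversation.
messages = [instructions] + conversation
print(messages)
```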

Embeddings

While the Avian API currently focuses on text generation, understanding embeddings is still relevant. An embedding is a vector representation of a piece of data (e.g., some text) that is meant to preserve aspects of its content and/or its meaning. Chunks of data that are similar in some way will tend to have embeddings that are closer together than unrelated data. Embeddings are useful for search, clustering, recommendations, anomaly detection, classification, and more. Although the Avian API does not offer embeddings as a separate feature, the concept underlies how language models represent and compare text internally.
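
To make the "closer together" idea concrete, the sketch below compares toy embedding vectors with cosine similarity. The vectors are invented for illustration and are not produced by any Avian model.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: near 1.0 for similar directions, near 0.0 for unrelated ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional embeddings (real embeddings have hundreds or thousands of dimensions).
cat = [0.9, 0.1, 0.0, 0.3]
kitten = [0.8, 0.2, 0.1, 0.4]
invoice = [0.0, 0.9, 0.8, 0.1]

print(cosine_similarity(cat, kitten))   # high: related meanings
print(cosine_similarity(cat, invoice))  # low: unrelated meanings
```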

Tokens

Text generation models in the Avian API process text in chunks called tokens. Tokens represent commonly occurring sequences of characters. For example, the string " tokenization" might be decomposed as " token" and "ization", while a short and common word like " the" is typically represented as a single token. Note that in a sentence, the first token of each word typically starts with a space character. As a rough rule of thumb, 1 token is approximately 4 characters or 0.75 words for English text.
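
The rule of thumb above can be turned into a quick estimate in code. This is only an approximation for English text; the model's own tokenizer is the authority on exact counts.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4 characters per token rule of thumb."""
    return max(1, round(len(text) / 4))

prompt = "Text generation models process text in chunks called tokens."
print(len(prompt), "characters ->", estimate_tokens(prompt), "tokens (approx.)")
```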

One limitation to keep in mind is that, for a text generation model, the combined length of the prompt and the generated output cannot exceed the model's maximum context length. The maximum context lengths for each model can be found in the model specifications in our documentation.

Context Length

The context length refers to the maximum number of tokens a model can process in a single request, including both the input prompt and the generated output. Models in the Avian API have specific context length limits, which are crucial to consider when designing prompts and planning tasks. Longer context lengths allow for more complex interactions but may also affect processing time and resource usage.
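
A simple way to apply this limit when planning a request is to budget the prompt and the desired output together, as sketched below. The context limit shown is a placeholder value, not an actual Avian model specification.

```python
def fits_in_context(prompt_tokens: int, max_output_tokens: int, context_limit: int) -> bool:
    """Return True if the prompt plus the requested output fits within the context length."""
    return prompt_tokens + max_output_tokens <= context_limit

CONTEXT_LIMIT = 8192  # placeholder; see the model specifications for real limits

prompt_tokens = 6500
max_output_tokens = 2000
if not fits_in_context(prompt_tokens, max_output_tokens, CONTEXT_LIMIT):
    # Either shorten the prompt or request fewer output tokens.
    max_output_tokens = CONTEXT_LIMIT - prompt_tokens
print("Output budget:", max_output_tokens, "tokens")
```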

For more detailed information on these concepts and how to best utilize the Avian API, please refer to our comprehensive documentation and guides on the Avian website.