
Quickstart

Getting the libraries

Libraries are published to Maven Central. You may need to add that repository explicitly to your build if you haven't done so before. Then add the library in the usual way.

repositories {
  mavenCentral()
}

dependencies {
  implementation("com.xebia:xef-core:<version>")
}

We publish all libraries at once under the same version, so a Gradle version catalog can be useful to keep them in sync.
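A version catalog declared in settings.gradle.kts could look like the following sketch (the alias names are just a suggestion):

// settings.gradle.kts
dependencyResolutionManagement {
  versionCatalogs {
    create("libs") {
      version("xef", "<version>")
      library("xef-core", "com.xebia", "xef-core").versionRef("xef")
      library("xef-reasoning", "com.xebia", "xef-reasoning").versionRef("xef")
    }
  }
}

// build.gradle.kts
dependencies {
  implementation(libs.xef.core)
}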

By default, Xef connects to OpenAI. To use their services you should provide the corresponding API key in the OPENAI_TOKEN environment variable, and have enough credits.

env OPENAI_TOKEN=<your-token> <gradle-command>
Caution

This library may transmit source code and potentially user input data to third-party services as part of its functionality. Developers integrating this library into their applications should be aware of this behavior and take necessary precautions to ensure that sensitive data is not inadvertently transmitted. Read our Data Transmission Disclosure for further information.

Your first prompt

After adding the library to your project, you get access to the AI function, which is your port of entry to the modern AI world. With it, you can prompt for information, which means posing a question to an LLM (Large Language Model).

The AI function in Xef DSL is a versatile tool for Kotlin developers, leveraging the power of AI models, including GPT-4 and others, for dynamic type inference and serialization. This function can interpret string inputs and return instances of various Kotlin data types, such as classes, enums, sealed classes, and more. Additionally, it supports different AI models and shared context through conversations.

The AI function has been available in xef since 0.0.5-alpha.20.

The easiest way to get started is to just get the information back as a string.

import com.xebia.functional.xef.AI

suspend fun main() {
  println(books("Artificial Intelligence"))
}

suspend fun books(topic: String): String =
  AI("Give me a selection of books about $topic")

The books function above uses the AI function, which connects to OpenAI and asks for a selection of books about the given topic. The result is a string with the answer from the LLM.

Structure

The output from the books function above is a plain string, which may be hard to parse. Fortunately, you can also ask xef.ai to give you back the information as a custom type.

The library takes care of instructing the LLM to build such a structure, and deserializes the result back for you.

import com.xebia.functional.xef.AI
import kotlinx.serialization.Serializable

suspend fun main() {
  val result = books("Heavy Metal")
  result.books.forEach {
    println(
      """
      Title: ${it.title}
      Author: ${it.author}
      """.trimIndent()
    )
  }
}

@Serializable
data class Books(val books: List<Book>)

@Serializable
data class Book(val title: String, val author: String)

suspend fun books(topic: String): Books =
  AI("Give me a selection of books about $topic")

xef.ai reuses Kotlin's common serialization, which requires adding the kotlinx.serialization plug-in to your build and marking each class as @Serializable. The LLM is usually able to detect which kind of information should go in each field based on its name (like title and author above). For those cases where the LLM is not able to infer the type, you can use the @Description annotation, shown in the example below.
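Enabling the plug-in in a Gradle Kotlin DSL build looks roughly like this (the version shown is an assumption; match it to the Kotlin version your project uses):

plugins {
  kotlin("jvm") version "1.9.23"
  kotlin("plugin.serialization") version "1.9.23"
}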

@Description annotations

import com.xebia.functional.xef.conversation.Description
import com.xebia.functional.xef.AI
import kotlinx.serialization.Serializable

suspend fun main() {
  val result = books("Artificial Intelligence")
  result.books.forEach {
    println(
      """
      Title: ${it.title}
      Author: ${it.author}
      Summary: ${it.summary}
      """.trimIndent()
    )
  }
}

@Serializable
@Description("A list of books")
data class Books(
  @Description("The list of books")
  val books: List<Book>
)

@Serializable
@Description("A book")
data class Book(
  @Description("The title of the book")
  val title: String,
  @Description("The author of the book")
  val author: String,
  @Description("A 20 word summary of the book")
  val summary: String
)

suspend fun books(topic: String): Books =
  AI("Give me a selection of books about $topic")

All types and properties annotated with @Description are used to build the description fields of the JSON schema sent to the LLM, so that it replies with the right format and data to deserialize the result back.

1. Overview of the AI Function

The AI function in Xef DSL empowers Kotlin developers to harness advanced AI models for dynamic data processing and decision-making. It analyzes a given string input and produces an output of the specified Kotlin data type, offering significant flexibility and power for various applications.

Key Features

  • Dynamic Type Inference: Produces an instance of the specified Kotlin data type from a plain string input (see the sketch after this list).
  • Support for Various Data Types: Handles classes, enums, sealed classes, lists, primitives, and more.
  • Configurable AI Models: Defaults to GPT-4, but can be configured to use other models.
  • Shared Context Capability: Maintains conversation history for context-aware responses.
  • Asynchronous & Streaming Operation: Designed to work seamlessly within Kotlin's coroutines framework.
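As a minimal sketch of the first two features, the requested type below is a plain list of strings; the prompt and target type are illustrative and assume the default OpenAI setup:

import com.xebia.functional.xef.AI

suspend fun main() {
  // The requested Kotlin type shapes both the instructions sent to the model
  // and the deserialization of its reply.
  val ingredients: List<String> = AI("List the ingredients of a classic Spanish omelette")
  ingredients.forEach(::println)
}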

2. Classes and Functions

The AI function can be used to instantiate complex types, making it ideal for scenarios requiring detailed data structuring and analysis.

Example: Environmental Analysis

import com.xebia.functional.xef.AI
import kotlinx.serialization.Serializable

@Serializable
data class EnvironmentalData(
  val airQualityIndex: Int,
  val dominantPollutants: List<String>,
  val recommendations: String
)

suspend fun main() {
  val environmentReport: EnvironmentalData =
    AI("Generate a detailed environmental report for New York City")
  println(environmentReport)
}

3. Choices, Sealed Classes, and Enums

Sealed classes in Kotlin represent restricted class hierarchies. The AI function can intelligently choose the appropriate subclass based on the input.

Example: Financial Advice System

import com.xebia.functional.xef.AI
import kotlinx.serialization.Serializable

@Serializable
sealed class FinancialAdvice {
  @Serializable
  data class Investment(val strategy: String, val riskLevel: String) : FinancialAdvice()

  @Serializable
  data class Savings(val tips: List<String>, val optimalSavingsRate: Double) : FinancialAdvice()
}

suspend fun getFinancialAdvice(scenario: String): FinancialAdvice =
  AI("Provide financial advice for $scenario")

suspend fun main() {
  val advice = getFinancialAdvice("planning for early retirement")
  println(advice)
}

The AI function can also be used to select enumeration values based on the provided context, which is especially useful for categorization tasks. At the moment, the AI function supports enums whose values can be encoded as single tokens, for maximum efficiency and lower latency. If your enum values are not single tokens, you can use the sealed class pattern instead, as sketched after the example below.

Example: Sentiment Analysis

import com.xebia.functional.xef.AI

enum class Sentiment {
  positive, negative
}

suspend fun main() {
  val sentiment = AI<Sentiment>("I like Xef!")
  println(sentiment) // Expected output: positive
}
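When the categories do not encode as single tokens, the sealed class pattern mentioned above is the alternative. A sketch with hypothetical categories (the class and value names are made up):

import com.xebia.functional.xef.AI
import kotlinx.serialization.Serializable

// Hypothetical categories that would not fit in single tokens.
@Serializable
sealed class TicketCategory {
  @Serializable data object BillingQuestion : TicketCategory()
  @Serializable data object TechnicalIssue : TicketCategory()
  @Serializable data object FeatureRequest : TicketCategory()
}

suspend fun main() {
  val category = AI<TicketCategory>("My invoice shows a double charge for May")
  println(category) // Expected output: BillingQuestion
}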

Prompts

As the prompt gets bigger or more nuanced, you may want to break it into smaller parts. Prompts can hold messages, a model, and other configuration. We use the builder pattern to include messages and other prompts, which are built before being sent to the chat completions endpoint.

import com.xebia.functional.xef.prompt.Prompt
import com.xebia.functional.xef.prompt.templates.system
import com.xebia.functional.xef.prompt.templates.assistant
import com.xebia.functional.xef.prompt.templates.user
import kotlinx.serialization.Serializable
import com.xebia.functional.xef.AI
import ai.xef.openai.StandardModel
import com.xebia.functional.openai.models.CreateChatCompletionRequestModel
import com.xebia.functional.xef.conversation.Description
import com.xebia.functional.xef.conversation.MessagePolicy
import com.xebia.functional.xef.conversation.MessagesFromHistory
import com.xebia.functional.xef.conversation.MessagesToHistory
import com.xebia.functional.xef.prompt.configuration.PromptConfiguration

@Serializable
@Description("A list of books")
data class Books(
  @Description("The list of books")
  val books: List<Book>
)

@Serializable
@Description("A book")
data class Book(
  @Description("The title of the book")
  val title: String,
  @Description("The author of the book")
  val author: String,
  @Description("A 20 word summary of the book")
  val summary: String
)

suspend fun books(topic: String): Books {
  val model = StandardModel(CreateChatCompletionRequestModel.gpt_3_5_turbo_16k_0613)

  val myCustomPrompt = Prompt(
    model = model,
    configuration = PromptConfiguration {
      temperature = 0.0
      maxTokens = 100
      messagePolicy = MessagePolicy(
        historyPercent = 50,
        historyPaddingTokens = 100,
        contextPercent = 50,
        addMessagesFromConversation = MessagesFromHistory.ALL,
        addMessagesToConversation = MessagesToHistory.ALL
      )
    }
  ) {
    +system("You are an assistant in charge of providing a selection of books about topics provided")
    +assistant("I will provide relevant suggestions of books and follow the instructions closely.")
    +user("Give me a selection of books about $topic")
  }

  return AI(myCustomPrompt)
}

suspend fun main() {
  val books = books("Cooking")
  println(books)
}

This style of prompting is more effective than simple string messages, as it sets the scene for how the LLM should behave and reply. We use a different role for each message constructed with the Prompt builder.

In a larger AI application it's common to end up with quite a few prompt templates. Online material like this course and this tutorial explains some of the most important patterns, some of which are readily available in xef.ai.

Context

LLMs have knowledge about a broad variety of topics. But by construction they are not able to respond to questions about information not available in their training set. However, you often want to supplement the LLM with more data:

  • Transient information referring to the current moment, like the current weather, or stock market trends over the past 10 days.
  • Non-public information, for example when summarizing a piece of text you're creating within your organization.

These additional pieces of information are called the context in xef.ai, and they are attached to every question sent to the LLM. You can add arbitrary strings to the context at any point, but the most common pattern is to use an agent to consult an external service and make its response part of the context. One such agent is Search, which uses a web search service to enrich the context, as in the full example following the sketch below.
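A sketch of the arbitrary-string variant first. The weather line is invented, and the call shape (a String target type alongside a conversation) mirrors the full Search example below:

import ai.xef.openai.StandardModel
import com.xebia.functional.openai.apis.EmbeddingsApi
import com.xebia.functional.openai.models.CreateChatCompletionRequestModel
import com.xebia.functional.xef.AI
import com.xebia.functional.xef.conversation.Conversation
import com.xebia.functional.xef.llm.fromEnvironment
import com.xebia.functional.xef.prompt.Prompt
import com.xebia.functional.xef.prompt.templates.user
import com.xebia.functional.xef.store.LocalVectorStore

suspend fun main() {
  val model = StandardModel(CreateChatCompletionRequestModel.gpt_4_1106_preview)
  val conversation = Conversation(LocalVectorStore(fromEnvironment(::EmbeddingsApi)))
  // Any string can be attached to the conversation's context.
  conversation.addContext("Today in Cádiz it is 25 degrees Celsius and sunny.")
  val question = Prompt(model) {
    +user("Based on this weather, what should I wear?")
  }
  val answer = AI<String>(question, conversation = conversation)
  println(answer)
}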

package com.xebia.functional.xef.conversation.contexts

import ai.xef.openai.StandardModel
import com.xebia.functional.openai.apis.EmbeddingsApi
import com.xebia.functional.openai.models.CreateChatCompletionRequestModel
import com.xebia.functional.xef.AI
import com.xebia.functional.xef.conversation.Conversation
import com.xebia.functional.xef.conversation.Description
import com.xebia.functional.xef.llm.fromEnvironment
import com.xebia.functional.xef.prompt.Prompt
import com.xebia.functional.xef.prompt.templates.user
import com.xebia.functional.xef.reasoning.serpapi.Search
import com.xebia.functional.xef.store.LocalVectorStore
import kotlinx.serialization.Serializable

@Serializable
data class Recommendation(
  @Description("The location")
  val location: String,
  @Description("The weather forecast")
  val weather: String,
  @Description("The recommended clothing to wear with this kind of weather, min 50 words, required not blank")
  val recommendation: String
)

suspend fun main() {
  val model = StandardModel(CreateChatCompletionRequestModel.gpt_4_1106_preview)
  val question = Prompt(model) {
    +user("Based on this weather, what do you recommend I should wear?")
  }
  val conversation = Conversation(LocalVectorStore(fromEnvironment(::EmbeddingsApi)))
  val search = Search(model = model, scope = conversation)
  conversation.addContext(search("Weather in Cádiz, Spain"))
  val recommendation = AI<Recommendation>(question, conversation = conversation)
  println(recommendation)
}

To execute the code you need to add the xef-reasoning library to your project, and provide the SERP_API_KEY environment variable with a valid token from SerpApi.

dependencies {
  implementation("com.xebia:xef-reasoning:<version>")
}

env SERP_API_KEY=<your-api-key> <gradle-command>