
Let’s Write a Time Logging Assistant

In this article, we’ll walk through building a purpose-driven AI agent that helps users log their time. The assistant interacts with the user, collects the required data (project name, duration, and description), validates the input, and saves the time log entry. This example demonstrates how to structure an LLM-powered agent in Python for a specific goal.

The Purpose

A clear purpose defines the boundaries and functionality of an AI agent. Our agent aims to:

  • Prompt the user for three pieces of information: project name, duration, and a description of the task.
  • Validate the project name dynamically against a predefined list.
  • Save the collected data and terminate the conversation.

This focused purpose ensures that every interaction is meaningful and avoids ambiguity in user-agent conversations.

Key Features

Here are the key features of our time logging assistant, illustrated with code snippets that highlight their implementation. (The full Python script is at the bottom of the article.)

1. Project Name Validation

The assistant ensures that project names provided by the user are validated dynamically. This is achieved using a combination of pydantic, Python’s Annotated type, and custom validation logic.

def check_project(name: str) -> str:
    project = get_project(name)
    if project is None:
        # Raise rather than assert so the check survives `python -O`;
        # pydantic reports this as a ValidationError.
        raise ValueError("Project not found")
    return project

ProjectName = Annotated[str, AfterValidator(check_project)]
  • ProjectName: Annotated type ensures the input is checked against the check_project function.
  • check_project: Validates the project name by checking against a list of predefined options. If the name isn’t found, it raises an exception.

The TimeLogEntry model integrates this validation:

class TimeLogEntry(BaseModel):
    """Time log entry model."""
    project: ProjectName = Field(description="The project name of the time log entry.")
    duration: str = Field(description="The duration of the time log entry.")
    description: str = Field(description="The description of the time log entry.")

By combining pydantic validation with Annotated, the agent guarantees that only valid project names can proceed.
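To see this validation in action outside the agent, here is a small standalone sketch. The hardcoded PROJECTS list stands in for the list_projects() tool:

```python
from typing import Annotated

from pydantic import AfterValidator, BaseModel, ValidationError

PROJECTS = ["project1", "project2", "project3"]  # stand-in for list_projects()


def check_project(name: str) -> str:
    # A ValueError raised here surfaces as a pydantic ValidationError
    # when the model is constructed.
    if name not in PROJECTS:
        raise ValueError("Project not found")
    return name


ProjectName = Annotated[str, AfterValidator(check_project)]


class TimeLogEntry(BaseModel):
    project: ProjectName
    duration: str
    description: str


entry = TimeLogEntry(project="project1", duration="2h", description="API work")
print(entry.project)  # project1

try:
    TimeLogEntry(project="Project1", duration="2h", description="wrong case")
except ValidationError:
    print("rejected: invalid project name")
```

An invalid (or wrongly cased) project name never reaches the model instance; the error can be fed back to the LLM, which then corrects itself or asks the user again.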

2. Tool to Retrieve the List of Projects

The assistant includes a list_projects tool, which retrieves and returns the available project names. This tool helps users choose a valid project name and eliminates guesswork.

def list_projects() -> list[str]:
    """
    List the project names.
    """
    return ["project1", "project2", "project3"]

The tool is dynamically added to the agent during initialization:

agent = Agent(tools=[get_project, list_projects])

3. Agent State Management

The assistant maintains state so it never loses context, enabling a better user experience:

  • Message History: Tracks the entire conversation for continuity in multi-turn interactions.
  • Conversation Status: Determines whether the session should continue or terminate.

class Agent(BaseModel):
    messages: list[openai.OpenAIMessageParam] = []
    next_action: Literal["continue", "stop"] = Field(
        description="The next action to take", default="continue"
    )
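Stripped of the LLM plumbing, the state handling reduces to appending to a message list and flipping a flag. Here is a simplified sketch in which plain dicts stand in for openai.OpenAIMessageParam:

```python
from typing import Literal

from pydantic import BaseModel


class AgentState(BaseModel):
    # Simplified stand-in for the article's Agent model: plain dicts
    # replace openai.OpenAIMessageParam, and tools are omitted.
    messages: list[dict] = []
    next_action: Literal["continue", "stop"] = "continue"


state = AgentState()
state.messages.append({"role": "user", "content": "Log 2h on project1"})
state.messages.append({"role": "assistant", "content": "Saved."})
state.next_action = "stop"  # set by save_time_log_entry in the real agent

print(len(state.messages), state.next_action)  # 2 stop
```

Keeping the full message history means every call to the model sees the whole conversation, while next_action gives the run loop a clean, explicit termination signal instead of parsing the model’s text for goodbyes.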

Full Script

Below is the complete code for the time logging assistant:

import json
from typing import Annotated, Callable, Literal
import dotenv

from mirascope.core import BaseDynamicConfig, Messages, openai
from pydantic import AfterValidator, BaseModel, Field


dotenv.load_dotenv()


def list_projects() -> list[str]:
    """
    List the project names.

    The names of the project are hardcoded for the demo.
    The case of the project name is important; do not change it or add extra spaces.
    """
    return ["project1", "project2", "project3"]


def get_project(name: str) -> str | None:
    """
    Get the project name.
    """
    if name in list_projects():
        return name
    else:
        return None


def check_project(name: str) -> str:
    project = get_project(name)
    if project is None:
        # Raise rather than assert so the check survives `python -O`;
        # pydantic reports this as a ValidationError.
        raise ValueError("Project not found")
    return project


ProjectName = Annotated[str, AfterValidator(check_project)]


class TimeLogEntry(BaseModel):
    """Time log entry model."""
    project: ProjectName = Field(description="The project name of the time log entry.")
    duration: str = Field(description="The duration of the time log entry.")
    description: str = Field(description="The description of the time log entry.")


class Agent(BaseModel):
    messages: list[openai.OpenAIMessageParam] = []
    tools: list[Callable] = Field(description="The tools to be called by the agent.")
    next_action: Literal["continue", "stop"] = Field(
        description="The next action to take", default="continue"
    )

    def save_time_log_entry(self, time_log: TimeLogEntry) -> str:
        """Save the time log entry to the database."""
        print(f"Saving time log entry: {time_log.model_dump_json()}")
        self.next_action = "stop"
        return f"Saved time log entry: {time_log.model_dump_json()}"

    @openai.call("gpt-4o-mini")
    def _call(self, txt="") -> BaseDynamicConfig:
        tools = [self.save_time_log_entry, *self.tools]
        messages = [
            Messages.System("""
            You are an AI assistant that helps employees with time logging.
            You are going to ask questions to the user to capture
            the time log for the day.

            You can ask the user the project name, duration, and description. Once you have all the information, call the save_time_log_entry tool to save the time log entry and close the conversation.
            """),
            *self.messages,
        ]
        return {
            "messages": messages,
            "tools": tools,
        }

    def _step(self, query: str) -> str:
        if query:
            self.messages.append(Messages.User(query))
        response = self._call(query)

        self.messages.append(response.message_param)
        tools_and_outputs = []
        if tools := response.tools:
            for tool in tools:
                print(f"[Calling Tool '{tool._name()}' with args {tool.args}]")
                call_result = tool.call()
                if isinstance(call_result, (dict, list)):
                    call_result = json.dumps(call_result)
                elif call_result is None:
                    call_result = ""
                elif not isinstance(call_result, str):
                    raise NotImplementedError(
                        f"Unsupported call result type: {type(call_result)}"
                    )
                tools_and_outputs.append((tool, call_result))
            resp_tool_messages = response.tool_message_params(tools_and_outputs)
            self.messages += resp_tool_messages
            step_result = self._step("")
            return step_result
        else:
            return response.content

    def run(self) -> None:
        while True:
            if not self.messages:
                query = "Welcome the user to the time logging assistant and explain what the assistant does and what you want the user to do."
            else:
                query = input("(User): ")
            if query in ["exit", "quit", "q"]:
                break
            print("(Assistant): ", end="", flush=True)
            step_output = self._step(query)
            print(step_output)
            if self.next_action == "stop":
                print("Conversation finished and time logged.")
                break


def main() -> None:
    agent = Agent(tools=[get_project, list_projects])
    agent.run()


if __name__ == "__main__":
    main()

Conclusion

This time logging assistant demonstrates how purpose-driven agents can combine conversational intelligence, robust validation, and modular tools. With a clear focus on collecting and validating time logs, the agent provides a seamless user experience. This approach can be adapted for a variety of use cases.

Yann Malet
