Designing SDKs for AI agents vs humans

Amrutha Gujjar · 5 min read

Category: Engineering


When we started building Preswald, our library for creating data apps, we wanted to make sure it was easy to generate code from it. With just a few prompts, we imagined generating code that could ingest data, query data, apply transformations, and create visualizations, all demonstrating the SDK’s capabilities.

As we worked on making this possible, we realized we also needed to design for coding agents as consumers of the SDK. Every API endpoint, function signature, and documentation example had to work for humans and machines alike. This meant focusing on clear, structured, and predictable interfaces that could support this dual purpose.

This blog explores how SDK design is evolving to meet these needs, covering challenges like structured inputs, dual-layer documentation, and edge-case handling.

Prompt Interfaces: The New Dev Tool Distribution

One of the most effective ways to demonstrate SDK capabilities today is through a prompt-based interface. These allow users (or coding agents) to explore the SDK by generating code in response to natural language queries.

For example, a user might input: "Create a dashboard showing sales trends by region."

A well-designed prompt interface could generate boilerplate code that:

  1. Ingests data from provided connection details.

  2. Queries the data from a database.

  3. Transforms the data to group it by region.

  4. Visualizes the result as charts.
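The four steps above can be sketched as generated boilerplate. This is an illustrative, stdlib-only sketch of what a prompt interface might emit for that query; the data, column names, and pipeline shape are assumptions, not Preswald's actual API.

```python
import csv
import io
from collections import defaultdict

# 1. Ingest: an inline CSV stands in for real connection details
raw = io.StringIO(
    "region,month,sales\n"
    "West,2024-01,120\n"
    "West,2024-02,135\n"
    "East,2024-01,90\n"
    "East,2024-02,110\n"
)
rows = list(csv.DictReader(raw))

# 2. Query: keep only the fields the dashboard needs
rows = [{"region": r["region"], "month": r["month"], "sales": int(r["sales"])}
        for r in rows]

# 3. Transform: aggregate sales per (region, month)
trends = defaultdict(int)
for r in rows:
    trends[(r["region"], r["month"])] += r["sales"]

# 4. Visualize: in a real app this dict would feed a charting call;
#    here we just print the chart-ready series
for (region, month), total in sorted(trends.items()):
    print(region, month, total)
```

A real generated app would swap the inline CSV for the user's connection details and the final loop for a plotting call, but the ingest-query-transform-visualize skeleton stays the same.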

Pairing this with live deployment tools, like instant previews or hosted environments, lets users immediately see the results. This reduces friction in onboarding and adoption by letting developers and agents explore the SDK's capabilities interactively, without extensive setup.

Prompt-based interfaces effectively turn the SDK into a conversational, interactive experience. They make the SDK accessible to both developers and AI agents while serving as a powerful distribution mechanism.

Best Practices for Designing Agent-Compatible SDKs

Machine-Readable Specifications

Coding agents rely on machine-readable formats to infer the SDK’s capabilities and usage patterns. Provide detailed specifications in formats like OpenAPI, JSON Schema, or GraphQL SDL:

  • Endpoints and Methods: List all available endpoints with descriptions of their purposes.

  • Parameters: Define the data types, constraints (e.g., required/optional), and default values for each parameter.

  • Responses: Include structured response formats for all cases (success, partial success, failure), and provide detailed error codes.
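To make this concrete, here is a minimal OpenAPI-style fragment for a hypothetical `create_dashboard` endpoint, written as a Python dict. The endpoint, parameters, and error code are invented for illustration; the point is that an agent can walk this structure to discover defaults and failure modes without reading prose.

```python
# Illustrative OpenAPI-style fragment; not a real Preswald spec.
spec = {
    "paths": {
        "/dashboards": {
            "post": {
                "operationId": "create_dashboard",
                "description": "Creates a dashboard from a query and a chart type.",
                "parameters": [
                    {"name": "query", "type": "string", "required": True},
                    {"name": "chart", "type": "string", "required": False,
                     "default": "line"},
                ],
                "responses": {
                    "201": {"description": "Dashboard created"},
                    "400": {"description": "Validation error",
                            "error_code": "INVALID_QUERY"},
                },
            }
        }
    }
}

# An agent can extract parameter defaults mechanically
op = spec["paths"]["/dashboards"]["post"]
defaults = {p["name"]: p.get("default") for p in op["parameters"]}
print(defaults)  # {'query': None, 'chart': 'line'}
```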

Exhaustive Usage Examples

Coding agents learn from usage patterns. Provide exhaustive examples of how to interact with the SDK, demonstrating typical workflows:

  • Step-by-Step Code Samples. For each major use case, show a sequence of API calls, from initialization to result processing.

  • Cover variations like optional parameters, handling edge cases, and retry mechanisms.
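A retry mechanism is a good example of the kind of variation worth demonstrating explicitly. Here is a small, self-contained sketch of a retry wrapper with exponential backoff; the flaky function simulates a transient failure, and in a real SDK sample it would be an actual network call.

```python
import time

# Illustrative retry helper with exponential backoff.
def with_retries(fn, attempts=3, base_delay=0.01):
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * 2 ** (attempt - 1))

# Simulated transient failure: succeeds on the third call
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network error")
    return {"rows": 42}

result = with_retries(flaky_fetch)
print(result, "after", calls["n"], "attempts")
```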

Metadata for Semantic Understanding

  • Intent Annotations. Include tags or descriptors for each endpoint or method to explain its purpose.
    Example: "create_user" -> { "tag": "user-management", "purpose": "Creates a new user account" }

  • Define logical connections/relationships between methods, such as required sequences or dependencies.
    Example: authenticate_user -> Required before calling get_user_data.
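The annotations and dependencies above can live in a small machine-readable registry. This sketch (method names mirror the examples; the registry format itself is an assumption) shows how an agent could validate a planned call sequence against declared prerequisites before generating code.

```python
# Hypothetical metadata registry: intent tags plus call-order dependencies.
REGISTRY = {
    "authenticate_user": {
        "tag": "auth",
        "purpose": "Obtains a session token",
        "requires": [],
    },
    "get_user_data": {
        "tag": "user-management",
        "purpose": "Fetches profile data for the authenticated user",
        "requires": ["authenticate_user"],
    },
    "create_user": {
        "tag": "user-management",
        "purpose": "Creates a new user account",
        "requires": [],
    },
}

def valid_call_order(sequence):
    """Check every method's prerequisites appear earlier in the sequence."""
    seen = set()
    for method in sequence:
        if any(dep not in seen for dep in REGISTRY[method]["requires"]):
            return False
        seen.add(method)
    return True

print(valid_call_order(["authenticate_user", "get_user_data"]))  # True
print(valid_call_order(["get_user_data"]))                       # False
```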

Prompt Engineering Examples

For coding agents that support prompt-driven interaction, provide query-response examples to train the agent on how to ask about and use the SDK:

  • Natural Language Queries: Show how a user might ask for functionality.

  • LLM-Friendly Responses: Provide structured, example-driven answers to these queries.
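One way to ship such query-response pairs is as structured few-shot examples the agent can fold into its context. Everything below (the queries, the `create_chart` call, its arguments) is invented for illustration.

```python
# Hypothetical few-shot pairs mapping natural language onto SDK calls.
FEW_SHOT_EXAMPLES = [
    {
        "query": "Show me total sales by region",
        "response": {
            "call": "create_chart",
            "args": {"group_by": "region", "metric": "sum(sales)"},
        },
    },
    {
        "query": "Plot monthly signups as a line chart",
        "response": {
            "call": "create_chart",
            "args": {"x": "month", "y": "signups", "kind": "line"},
        },
    },
]

# A prompt builder can fold these into the system context verbatim
prompt_block = "\n".join(
    f"Q: {ex['query']}\nA: {ex['response']}" for ex in FEW_SHOT_EXAMPLES
)
print(prompt_block)
```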

Contextual Documentation

  • Combine human-readable guides (Markdown/HTML) with machine-parsable formats (OpenAPI specs).

  • Coding agents often use pre-trained data that lacks context on niche tools. Answer common questions explicitly, e.g., "How do I authenticate?" or "What happens when a parameter is missing?"

Error Context and Debugging Feedback

  • Clearly categorize errors (e.g., validation errors, authentication errors, system errors).

  • Provide suggestions on how to resolve errors within the error message itself.
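Both points can be captured in one structured error type. This is a sketch (the class name and categories are assumptions): the error carries a machine-readable category plus a resolution hint, and embeds the hint in the human-readable message so both audiences get it.

```python
# Sketch of a structured SDK error an agent can act on programmatically.
class SDKError(Exception):
    def __init__(self, category, message, resolution):
        self.category = category      # e.g. "validation", "auth", "system"
        self.resolution = resolution  # actionable fix, also in the message
        super().__init__(f"[{category}] {message} Hint: {resolution}")

try:
    raise SDKError(
        "validation",
        "Parameter 'region' is required but was missing.",
        "Pass region=<string>, e.g. region='West'.",
    )
except SDKError as err:
    caught = err
    print(err)
```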

Versioning and Change Logs

  • Version Tags. Include API version numbers in specs and documentation.

  • Changelogs. Provide machine-readable logs summarizing changes (e.g., added endpoints, deprecated features).
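A machine-readable changelog can be as simple as a list of typed entries. The versions and symbols below are invented; the point is that an agent can filter the log mechanically, e.g. to avoid generating calls to deprecated endpoints.

```python
# Hypothetical machine-readable changelog entries.
CHANGELOG = [
    {"version": "1.2.0", "type": "added", "symbol": "create_dashboard"},
    {"version": "1.2.0", "type": "deprecated", "symbol": "make_report",
     "replacement": "create_dashboard"},
    {"version": "1.1.0", "type": "added", "symbol": "make_report"},
]

# An agent can skip deprecated symbols and look up their replacements
deprecated = {e["symbol"] for e in CHANGELOG if e["type"] == "deprecated"}
replacements = {e["symbol"]: e["replacement"]
                for e in CHANGELOG if e["type"] == "deprecated"}
print(sorted(deprecated))  # ['make_report']
```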

The Takeaway

The rise of AI agents as SDK consumers will fundamentally change how we should design developer tools. Prompt-based interfaces, which allow natural language queries to generate meaningful code and workflows, are becoming an important part of the developer product experience. They reduce onboarding friction, showcase capabilities interactively, and make SDKs accessible to both developers and agents.

Give us a star on GitHub if you want to follow along with our work: https://github.com/StructuredLabs/preswald