WHAT YOU'LL LEARN
  • how AI fits into the Webiny development workflow
  • what AI is good at, and where you still need to lead
  • how to write effective prompts for Webiny-related tasks
  • how to review and refine AI-generated code

Overview

Webiny is designed to be AI-programmable — its typed APIs, consistent extension patterns, and MCP server all make it easier for AI coding agents to generate correct, platform-native code. But getting good results from AI requires more than just asking it to build something. The best outcomes come when you understand the platform, give the agent the right context, and review the output critically.

This guide covers the practical workflow for using AI effectively when building with Webiny.

The Development Workflow

A productive AI-assisted workflow with Webiny typically follows this pattern:

  • Understand the area of Webiny you want to change — which app, which extension point, what the expected behavior is
  • Gather context — let the agent load the relevant MCP skill, point it at existing code, or reference docs
  • Ask for a focused implementation — a single extension, a model change, a lifecycle hook — not an entire system at once
  • Review the result — check that it follows Webiny patterns, uses the right imports, and belongs in the right place
  • Test locally — use yarn webiny watch to verify the change works
  • Iterate — refine through short follow-ups rather than starting over

This approach treats AI as a capable teammate who needs clear direction, not a magic button.

Where AI Helps the Most

AI is especially effective when you already know what you want to build and need help getting there faster. Common tasks where AI adds real value include:

  • Scaffolding extensions — content models, lifecycle hooks, GraphQL schemas, Admin UI components
  • Generating boilerplate — the ModelFactory, GraphQLSchemaFactory, and EventHandler patterns follow repeatable structures that AI handles well
  • Wiring integrations — connecting lifecycle events to external services, setting up API keys, building SDK queries
  • Explaining code — asking the agent to explain unfamiliar parts of an existing Webiny project
  • Debugging — describing an error and asking for likely causes and fixes

The MCP server makes these tasks significantly more reliable because the agent can load the exact skill it needs — field types, validator options, DI services, event handler signatures — before writing code.

Where You Still Need to Lead

AI generates code, but it does not own the consequences of that code. You are still responsible for:

  • Choosing the right extension point — should this be a lifecycle hook, a use case override, or a custom GraphQL resolver?
  • Deciding where code lives — API extension vs Admin extension vs infrastructure extension
  • Validating business rules — AI does not know your domain logic unless you tell it
  • Checking security — access control, tenant boundaries, and permission implications
  • Ensuring upgrade-safety — code that modifies the wrong layer or bypasses extension points may break on future Webiny updates

Think of AI as an accelerator. You set the direction; it helps you get there faster.

Writing Effective Prompts

The quality of AI output is directly tied to the quality of your prompt. When working with Webiny, good prompts share a few characteristics:

  • Specific about what to build — “Add a publishDate datetime field to the Article model” is far better than “Add date support”
  • Clear about where it goes — “Create an API extension at extensions/ArticlePublishHook.ts” removes ambiguity
  • Explicit about constraints — “Use the EntryBeforePublishEventHandler pattern, filter by article modelId, and follow existing DI patterns”
  • Explicit about the expected output — “Give me the full file, including the createImplementation export and the registration line for webiny.config.tsx”

A vague prompt like this produces unreliable results:

“Add approval logic to Webiny.”

A better version provides the context the agent needs:

“Create an API extension that hooks into the EntryBeforePublish event for the article model. If the status field value is not approved, prevent publishing by throwing an error. Use the standard EventHandler pattern with DI. Place the file at extensions/ArticleApprovalHook.ts.”
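For comparison, here is a rough sketch of what that prompt should produce. The real Webiny v6 event types and handler base classes will differ from what is shown; the interfaces below are stand-ins defined inline so the sketch stands on its own.

```typescript
// Illustrative sketch only — EntryBeforePublishEvent and EventHandler are
// stand-in types defined here, not Webiny's real exports.
interface EntryBeforePublishEvent {
  modelId: string;
  entry: { values: Record<string, unknown> };
}

interface EventHandler<T> {
  handle(event: T): void;
}

class ArticleApprovalHook implements EventHandler<EntryBeforePublishEvent> {
  handle(event: EntryBeforePublishEvent): void {
    // Lifecycle events fire for every model; filter to articles only.
    if (event.modelId !== "article") {
      return;
    }
    // The business rule from the prompt: block publishing unless approved.
    if (event.entry.values.status !== "approved") {
      throw new Error("Article must be approved before publishing.");
    }
  }
}

// An entry from an unrelated model passes through untouched.
new ArticleApprovalHook().handle({
  modelId: "blogPost",
  entry: { values: { status: "draft" } },
});
```

Notice how each constraint in the prompt (the event, the model filter, the error on unapproved status, the file location) maps directly to a line of the implementation — that traceability is what makes the output easy to review.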

Giving AI the Right Context

Before asking for code, make sure your agent has the context it needs. With the Webiny MCP server connected, the agent can load skills automatically. You can also help by:

  • Pointing at existing code — “Look at extensions/ArticleModel.ts and follow the same pattern”
  • Referencing a skill explicitly — “Load the lifecycle-events skill before implementing this”
  • Sharing constraints — “This project uses DynamoDB only, no OpenSearch”
  • Providing the current webiny.config.tsx — so the agent knows what extensions are already registered

The more context you provide, the less the agent needs to guess — and the fewer corrections you need to make afterward.
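Sharing your webiny.config.tsx is especially useful because registration is easy for an agent to miss. The fragment below is illustrative only — the import path and props are assumptions, not the documented Webiny v6 API — but it shows the shape of information the agent gets from seeing the file: which extensions already exist and how new ones are wired in.

```tsx
// Hypothetical webiny.config.tsx shape — import path and props are assumed
// for illustration, not taken from Webiny's documented API.
import { Api, Admin } from "webiny/config"; // assumed path

export default (
  <>
    <Api.Extension src="extensions/ArticleModel.ts" />
    <Api.Extension src="extensions/ArticleApprovalHook.ts" />
    <Admin.Extension src="extensions/ArticleStatusField.tsx" />
  </>
);
```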

Breaking Work Into Steps

AI performs better on focused tasks than on large, ambiguous requests. Instead of asking for an entire feature at once, break the work into steps:

  • Define the content model
  • Add the lifecycle hook
  • Create the Admin UI component
  • Wire up the registration in webiny.config.tsx
  • Test and refine

Each step produces a reviewable result. If something is wrong, you catch it early and fix it before building on top of it.
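As a concrete example of the granularity to aim for, the first step alone might produce something this small. The createModel factory here is a simplified stand-in, not Webiny's real ModelFactory API:

```typescript
// Step 1 only: define the content model. createModel is a stand-in for
// Webiny's ModelFactory so this sketch is self-contained and runnable.
type Field = { id: string; type: "text" | "datetime" };

function createModel(modelId: string, fields: Field[]) {
  return { modelId, fields };
}

const articleModel = createModel("article", [
  { id: "title", type: "text" },
  { id: "status", type: "text" },
  { id: "publishDate", type: "datetime" },
]);

// A result this small is trivial to review before moving on to step 2,
// the lifecycle hook that will read the status field defined here.
console.log(articleModel.fields.length); // 3
```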

Reviewing AI-Generated Code

When the agent delivers code, go beyond “does it compile?” and ask:

  • Does it use the right pattern? — ModelFactory, GraphQLSchemaFactory, EventHandler, or AdminConfig depending on what it does
  • Is it in the right place? — API extension for backend logic, Admin extension for UI changes
  • Does it use the correct imports? — webiny/ prefix (not @webiny/), correct module paths
  • Does it follow the DI pattern? — dependencies declared in the constructor and listed in the dependencies array in the same order
  • Is the export correct? — API extensions must use export default with createImplementation
  • Does it register properly? — the right <Api.Extension> or <Admin.Extension> line in webiny.config.tsx

These checks take seconds and prevent the kind of subtle bugs that are harder to find later.
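The DI check in particular is mechanical. The sketch below uses stand-in services and a toy resolver — not Webiny's real container — to show why the constructor parameters and the dependencies array must list the same services in the same order: the container instantiates the array and passes the results to the constructor positionally.

```typescript
// Stand-in services for illustration; real projects would inject Webiny's
// own services here.
class Logger {
  info(msg: string): string { return `[info] ${msg}`; }
}

class ArticleRepository {
  save(id: string): string { return `saved ${id}`; }
}

class PublishArticle {
  // The convention under review: `dependencies` and the constructor must
  // name the same services in the same order.
  static dependencies = [Logger, ArticleRepository];

  constructor(private logger: Logger, private repository: ArticleRepository) {}

  execute(id: string): string {
    this.logger.info(`publishing ${id}`);
    return this.repository.save(id);
  }
}

// A minimal resolver standing in for the real DI container: build each
// declared dependency, then pass them to the constructor positionally.
function resolve<T>(Target: {
  dependencies: Array<new () => any>;
  new (...args: any[]): T;
}): T {
  const deps = Target.dependencies.map(Dep => new Dep());
  return new Target(...deps);
}

const useCase = resolve(PublishArticle);
console.log(useCase.execute("article-1")); // prints "saved article-1"
```

If the array and constructor drift out of order, the resolver silently passes the wrong service into each parameter, which is exactly the subtle bug the review checklist is meant to catch.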

Common Mistakes to Watch For

A few patterns come up frequently when reviewing AI-generated Webiny code:

  • Wrong import prefix — using @webiny/ instead of webiny/ for v6 APIs
  • Missing model filter — lifecycle hooks fire for all models by default; the handler must check modelId to avoid running on unrelated content
  • Inventing patterns — generating custom abstractions instead of using the built-in factories (ModelFactory, GraphQLSchemaFactory, etc.)
  • Skipping registration — creating the extension file but forgetting the <Api.Extension> or <Admin.Extension> entry in webiny.config.tsx
  • Overbuilding — generating a complex solution when a simpler extension point already exists

If you notice these, correct the agent and it will usually adapt quickly.
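Some of these mistakes can even be caught mechanically. The helper below is an illustrative sketch, not a Webiny tool: it scans generated source for the wrong import prefix described above, and you could extend the same idea to flag a missing modelId check or a file with no matching registration entry.

```typescript
// Review helper (illustrative sketch): flag v6 extension code that imports
// from the "@webiny/" prefix instead of "webiny/".
function findSuspectImports(source: string): string[] {
  return source
    .split("\n")
    .filter(line => /from\s+["']@webiny\//.test(line))
    .map(line => `wrong import prefix: ${line.trim()}`);
}

const generated = `
import { something } from "@webiny/api";
import { other } from "webiny/api";
`;

// Only the first import is flagged; the "webiny/" import is fine.
console.log(findSuspectImports(generated).length); // 1
```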

AI learns from your corrections

Most AI coding agents improve within a conversation as you correct them. If the agent uses the wrong import path or pattern, point it out once. It will typically follow the correction for the rest of the session.

Putting It All Together

The practical workflow looks like this:

  • Connect the MCP server (covered in Connect Your AI Environment)
  • Decide what you want to build
  • Ask the agent to load the relevant skill
  • Request a focused implementation with clear constraints
  • Review the output against the checklist above
  • Test locally with yarn webiny watch
  • Deploy with yarn webiny deploy api or yarn webiny deploy admin
  • Iterate as needed

This approach makes AI a genuine force multiplier — you move faster, write less boilerplate, and still maintain full control over the quality and architecture of your project.