Do software systems integrating with AI still benefit from good software architecture?

I’ve been thinking about this as I see more people implementing business processes with AI agents. Non-engineers have little choice but to rely on no-code platforms like n8n or low-code frameworks like Windmill to build them. As an engineer, I see agents as just another component to design, implement, and integrate into the system, most likely living alongside other non-AI components.

Let’s take Clean Architecture. The interaction between business entities, services, and repositories happens in the use cases (e.g. CompleteTask). What if we add AI as part of completing the use case?

My gut tells me that we still need to model our business logic to be independent of the technology layer. I may be classifying an email, but I don’t care how: it could be an ML algorithm, a call to an AI service, or a text-based algorithm. AI is infrastructure, not architecture.
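
Here’s a minimal sketch of what that looks like, in TypeScript. Every name here (Email, Classification, TriageEmail) is illustrative, not a prescribed design; the point is that the use case depends on a domain-level port, and whatever sits behind it is an infrastructure detail.

```typescript
// Domain types: expressed in business terms, no AI vocabulary.
type Email = { subject: string; body: string };
type Classification = {
  label: "spam" | "support" | "sales";
  confidence: number;
};

// The port the use case depends on. How classification happens
// (LLM, ML model, keyword rules) is invisible from here.
interface EmailClassifier {
  classify(email: Email): Promise<Classification>;
}

class TriageEmail {
  constructor(private readonly classifier: EmailClassifier) {}

  async execute(email: Email): Promise<string> {
    const result = await this.classifier.classify(email);
    // The policy for ambiguous results is business logic, so it
    // lives in the use case, not in the classifier implementation.
    return result.confidence < 0.5 ? "needs-human-review" : result.label;
  }
}
```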

The good practice would still be to create an abstraction exposing a stable API, decoupling its users from its implementation. Especially given the current pace of AI development: APIs change, models get deprecated, new capabilities emerge monthly. The volatility makes the case for abstraction stronger, not weaker.

The contract should be domain-typed, not prompt-shaped. Something like EmailClassifier.classify(email: Email): Classification, not AIService.complete(prompt: string): string. The abstraction expresses what you need in business terms.
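
Continuing the sketch, an LLM-backed adapter could then live behind that contract. Here llm.complete stands in for whatever AI client you actually use; it’s a placeholder, not a real API.

```typescript
class LlmEmailClassifier implements EmailClassifier {
  constructor(
    // Hypothetical LLM client; substitute your provider's SDK.
    private readonly llm: { complete(prompt: string): Promise<string> }
  ) {}

  async classify(email: Email): Promise<Classification> {
    // The prompt is an implementation detail, invisible to callers.
    const raw = await this.llm.complete(
      `Classify this email as spam, support, or sales. ` +
        `Reply with JSON: {"label": "...", "confidence": 0.0}.\n\n` +
        `Subject: ${email.subject}\n\n${email.body}`
    );
    // Parsing (and, in real code, validation) also stays behind
    // the contract, so malformed output never leaks into the domain.
    const parsed = JSON.parse(raw);
    return { label: parsed.label, confidence: parsed.confidence };
  }
}
```

Swapping providers, or replacing the LLM with a heuristic, now touches only this class.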

For testing, I’d rather test e2e, faking the result from the LLM. You’re testing the system’s behavior given certain AI outputs, not the AI itself. It’s worth covering the edge cases: low-confidence responses, failures, unexpectedly structured output. This forces you to make explicit how your use case handles AI ambiguity.
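
Continuing the sketch above, and assuming a Jest-style runner, such a test could look like this: the fake stands in for the LLM, and the assertion is about the use case’s policy, not the model.

```typescript
// A fake implementation of the port: no LLM, just canned output.
class FakeClassifier implements EmailClassifier {
  constructor(private readonly result: Classification) {}
  async classify(_email: Email): Promise<Classification> {
    return this.result;
  }
}

test("low-confidence classifications are escalated to a human", async () => {
  const triage = new TriageEmail(
    new FakeClassifier({ label: "support", confidence: 0.3 })
  );

  // We assert on how the system handles ambiguity, not on the AI.
  const outcome = await triage.execute({ subject: "Hi", body: "..." });
  expect(outcome).toBe("needs-human-review");
});
```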