In critical systems, flexibility is a requirement
Author: Valeriano Sandrucci
Control and Feedback
In dynamic systems theory, effective control is not based on perfect prediction, but on feedback. The behavior of the system is observed, deviations from the objective are measured, and progressive corrections are introduced. The idea is not to anticipate every possible disturbance, but to build the capacity to react.
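The feedback principle can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the names `run_feedback_loop`, `gain`, and `target` are illustrative, not from any control library): the system's state is observed, the deviation from the objective is measured, and a partial correction is applied on each cycle.

```python
def run_feedback_loop(initial, target, gain=0.5, steps=20):
    """Observe the state, measure the deviation from the objective,
    and apply a proportional, progressive correction each cycle."""
    state = initial
    history = [state]
    for _ in range(steps):
        error = target - state   # measured deviation from the objective
        state += gain * error    # progressive correction, not a full jump
        history.append(state)
    return history

trajectory = run_feedback_loop(initial=0.0, target=100.0)
```

No disturbance is predicted in advance; the state converges because each cycle reacts to the deviation that is actually observed.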
Software development in critical contexts should follow a similar logic.
The term critical does not only refer to fields such as aerospace or healthcare. It may describe a system that must sustain high workloads, guarantee operational continuity, integrate heterogeneous platforms, or continue functioning even in the presence of partial interruptions in external services. In other cases, it means that an error produces significant economic impact or that downtime blocks an entire operational line.
In situations like these, there is always one common element: a degree of unpredictability that cannot be completely eliminated. When this unpredictability is structural, flexibility is not a methodological luxury but a necessary property of the system.
The Limits of Total Prediction
In critical contexts, the most intuitive reaction is to try to analyze everything before starting. The more delicate the system appears, the stronger the temptation becomes to anticipate every scenario, every edge case, and every possible failure.
The problem is that in complex systems, analysis tends to expand indefinitely. Every deeper investigation introduces new variables, new dependencies, and new questions. This leads to a form of paralysis by analysis: the attempt to reduce risk through total understanding does not produce greater control, but delay.
Risk is not eliminated. It is simply pushed further into the future.
At the opposite extreme lies overengineering: building today all the complexity that might be needed tomorrow. Highly abstract architectures, sophisticated orchestration systems, and resilience mechanisms designed for hypothetical scenarios often generate the same result. The system becomes difficult to evolve and cognitive cost grows rapidly.
Fragility does not emerge from a lack of complexity, but from the lack of proportion between the system that has been built and the real problems it must solve.
Designed Flexibility
When talking about flexibility, it is easy to fall into a misunderstanding. Flexible does not mean working without structure, making random decisions, or constantly changing direction.
Effective flexibility is designed. It means proceeding through defined blocks of work, with explicit objectives and clear validation criteria. In this way, the ability to adapt remains contained within recognizable boundaries.
When these boundaries are missing, flexibility turns into organizational noise.
In critical systems, flexibility must be applied where uncertainty is real, not distributed indiscriminately throughout the entire system.
Complexity Where It Is Needed
A useful principle is to introduce complexity only when uncertainty requires it.
If an integration with an external system is unstable, it is reasonable to introduce resilience mechanisms such as queues, controlled retries, circuit breakers, and detailed monitoring. When instead a workflow is deterministic and under control, adding layers of abstraction or sophisticated mechanisms rarely increases system robustness. More often, it increases the cognitive cost for those who will have to understand and maintain it.
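As one concrete illustration of such a resilience mechanism, here is a minimal circuit-breaker sketch. It is an assumption-laden example, not a production implementation: after a number of consecutive failures the circuit "opens" and rejects calls until a cooldown elapses, protecting the rest of the system from a repeatedly failing integration.

```python
import time

class CircuitBreaker:
    """Illustrative circuit breaker: after `max_failures` consecutive
    failures, calls are rejected until `reset_after` seconds pass,
    then one trial call is allowed through (half-open state)."""

    def __init__(self, max_failures=3, reset_after=30.0, clock=time.monotonic):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.clock = clock        # injectable clock, useful for testing
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: call rejected")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = self.clock()  # trip the breaker
            raise
        else:
            self.failures = 0      # success resets the failure count
            return result
```

The point is proportionality: this mechanism earns its complexity only in front of an unstable external dependency; wrapping a deterministic, in-process workflow in it would add cognitive cost without adding robustness.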
In a recent project, a company needed to integrate three legacy systems with a new application layer. The initial idea was to build a distributed and fully event-driven architecture designed for horizontal scalability.
The analysis of real workloads revealed a different picture. The main problem was not scalability, but the inconsistent quality of the data coming from one of the legacy systems. The critical issue was not the system’s ability to handle more requests, but its tolerance for data inconsistencies.
The useful flexibility therefore did not concern distributed infrastructure, but validation, reconciliation, and exception-management mechanisms. Without an iterative cycle capable of quickly exposing the system to real data, the investment would have been directed toward a secondary problem.
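A validation-and-quarantine step of the kind described above might look like the following sketch. The record fields (`customer_id`, `amount`) and function names are hypothetical, chosen only to show the shape of the mechanism: invalid records are diverted to an exception path for reconciliation instead of silently corrupting the new application layer.

```python
def validate_record(record):
    """Return a list of problems found; an empty list means the
    record can be ingested as-is."""
    problems = []
    if not record.get("customer_id"):
        problems.append("missing customer_id")
    amount = record.get("amount")
    if not isinstance(amount, (int, float)) or amount < 0:
        problems.append("invalid amount")
    return problems

def partition_records(records):
    """Split incoming legacy data into records ready for processing
    and records routed to an exception queue for reconciliation."""
    accepted, quarantined = [], []
    for record in records:
        problems = validate_record(record)
        if problems:
            quarantined.append((record, problems))
        else:
            accepted.append(record)
    return accepted, quarantined
```

Running real legacy data through a cycle like this exposes the actual inconsistency patterns early, which is exactly the feedback the iterative approach depends on.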
Short Cycles and Observation
If total prediction is unrealistic and improvisation is risky, a third path remains: working through short and controlled cycles.
Each cycle should have a clear sub-objective, a limited scope, and explicit validation criteria. At the end of the work, the system is observed and measured. It becomes possible to understand where it behaved as expected, where it showed fragility, and which initial assumptions proved correct.
This approach introduces into software development a control logic based on feedback. Uncertainty is not addressed by imagining infinite scenarios, but by building parts of the system, observing their behavior, and correcting subsequent decisions.
Two Levels of Flexibility
In critical systems there are two distinct levels of flexibility.
Architectural flexibility: the software’s ability to absorb changes without systemic failures.
Organizational flexibility: the team’s ability to modify priorities, redistribute effort, and revise decisions in light of new evidence.
When these two dimensions are not aligned, difficulties emerge. A modular architecture managed by a rigid organization slows down decision-making. A dynamic organization operating on a fragile architecture instead generates technical instability.
The robustness of a system emerges from the balance between these two components.
Stability and Adaptation
In critical contexts, stability and change are often seen as opposites. The assumption is that keeping a system stable requires avoiding modifications.
Over the long term, however, stability depends on the ability to adapt. When a system cannot evolve gradually, change still arrives, but in the form of crisis.
Designed flexibility enables progressive and controlled adaptation. The absence of flexibility produces sudden and difficult-to-manage changes.
A Property of the System
In critical systems, it is not realistic to completely eliminate uncertainty before starting. It is not sustainable to design today every possible future variation, nor is it prudent to rely on improvisation.
A more solid approach consists of working through clear cycles, introducing complexity only where evidence justifies it, and observing the real behavior of the system in order to progressively correct decisions.
Flexibility is not a generic attitude. It is a technical and organizational property that must be intentionally designed. In critical systems, it becomes an integral part of the system’s robustness itself.
