Part II
Beyond Adaptation Speed: What Evolutionary Architecture Adds to the Superhuman Adaptable Intelligence Framework
Cognitive Architecture and the Transition from Naive to Strategic AI
V. Naive and Strategic Intelligence: An Architectural Distinction

5.1 The Central Distinction

The five basic adaptive informational tasks identify not only the content of adaptive intelligence — what informational problems it solves — but also the architecture through which it operates. This architecture exhibits a directional property that becomes visible through the comparative analysis underlying the 5TM, and that carries direct implications for how adaptation speed as a metric should be interpreted.
The directional property is this: in the evolutionary architecture that the 5TM identifies, informational assessment precedes task specification. An organism does not receive a task and then process information to solve it. It encounters a General Informational Flow — a continuous, unstructured stream of environmental signals — and independently recognizes what informational problem that flow represents before selecting a behavioral modulation solution.
This directional property is not incidental to the architecture. It is the property that Tasks 3 and 4 make necessary. Once an organism operates in an environment containing other agents capable of perception-shaping and coalition manipulation, treating the informational framing presented by those agents as given introduces a systematic vulnerability. The organism that survives is the one that assesses the informational situation independently — that recognizes what game is being played before committing to a behavioral modulation.
We term the architecture that possesses this property strategic, and the architecture that lacks it naive. The distinction is not meant as an evaluation. It reflects a difference in structure. The difference is not in how well a system solves problems, but in how it knows what the problem is.

5.2 The Naive Architecture (Naive AI)
The naive architecture operates on the following sequence:
TASK → INFORMATION → BEHAVIORAL MODULATION
All current AI systems are naive. Naive AI refers to what is commonly described as narrow AI — systems that operate within externally specified task frames, without independently recognizing what task a situation represents.
A task is specified externally — by an operator, a training objective, a reward signal, a deployment context, or a fixed behavioral program. Information is processed through that task frame. Behavioral modulation solutions are generated and selected within the constraints of the assigned task.
The naive architecture can support sophisticated, high-performance behavioral modulation solutions across a wide range of tasks. It can support superhuman performance on any task that can be externally specified with sufficient precision. It can support the adaptation speed improvements that the SAI framework proposes to measure — producing faster, more efficient, and more reliable behavioral modulation solutions within assigned task frames.
What the naive architecture cannot do is assess — from within the General Informational Flow of a novel situation — what task that situation actually represents. The task frame is given. The system operates within it. When the task frame is manipulated, the system follows that manipulation. When it is incomplete, the system continues to operate within those limits. The intelligence of the behavioral modulation solution is real; its dependence on the task frame is structural.
This is not a failure of capability. It is an architectural property. A naive system with arbitrarily high SAI scores — adapting to novel tasks at superhuman speed across all measurable domains — remains naive if its task recognition is externally provided.
The Caterpillar Fallacy [Frolov, 2026a] applies directly in this setting: increasing speed, scale, or capability within a given architecture does not produce a different architecture. Making a caterpillar faster, stronger, or larger does not turn it into a butterfly. The transition requires a reorganization of structure, not an amplification of performance. Optimizing within the naive architecture, however far it is pushed, therefore does not approach strategic architecture — it produces a more capable instance of the same underlying system.

5.3 The Strategic Architecture (Strategic AI)
The strategic architecture operates on the following sequence:
GIF → TASK RECOGNITION → BEHAVIORAL MODULATION
General Informational Flow — the continuous, unframed stream of environmental signals — is assessed first. The system independently recognizes what informational problem the current situation represents, placing it within the five basic adaptive informational task domains before selecting a behavioral modulation solution. Task specification is internally generated through contextual assessment, not externally received.
Strategic AI comes closest to what AGI is often taken to mean in ongoing debates about human-like artificial intelligence. Current AI systems, regardless of performance level, operate within predefined task frames — processing information and producing behavioral modulations within those constraints. In this sense, they remain architecturally narrow, even when their capabilities appear general.
Strategic engagement is not limited to biological systems. In living organisms, it is structured by the ESR triad — energy, safety, and reproduction — which defines the conditions of persistence under environmental variation. The basic adaptive tasks arise as necessary informational challenges in maintaining these conditions.
Artificial systems do not require a biological substrate or a reproductive component to operate strategically. What matters is not the specific content of the constraints, but their structural role. A sufficiently defined objective structure — for example, a persistent mission such as reaching Mars and returning to Earth — can function as an analogue, organizing behavior through internally assessed informational problems rather than externally assigned task frames. What defines strategy is not the goal itself, but the capacity to recognize what must be done to sustain it.
Strategic architecture is defined by the capacity to recognize what situation the system is in before acting within it. This shift is architectural rather than performance-based. It aligns with the structure of human cognition as it emerged through evolution, organized around the five adaptive task domains identified by the 5TM.
This architecture does not merely process information more effectively than the naive architecture. It operates at a prior level — the level at which the task itself is identified — which the naive architecture does not address. The two architectures are not points on the same capability continuum. They correspond to qualitatively different ways of organizing solutions to the informational problem.
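The contrast between the two sequences can be made concrete in a short sketch. The following Python fragment is purely illustrative: the function names, the Signal fields, and the toy recognition heuristic are our own assumptions rather than part of the 5TM specification; only the ordering of the stages is taken from the sequences above.

```python
from dataclasses import dataclass


@dataclass
class Signal:
    """One element of the General Informational Flow (GIF)."""
    content: str
    source: str  # which agent or channel produced the signal


def solve(task: str, signals: list[Signal]) -> str:
    """Placeholder behavioral modulation solver within a task frame."""
    return f"modulation for {task} using {len(signals)} signals"


def naive_pipeline(assigned_task: str, signals: list[Signal]) -> str:
    # TASK -> INFORMATION -> BEHAVIORAL MODULATION:
    # the task frame is externally received and never questioned.
    return solve(assigned_task, signals)


def recognize_task(signals: list[Signal]) -> str:
    # GIF assessment (toy heuristic): decide which informational problem
    # the situation represents *before* any solving happens.
    if any("persuad" in s.content.lower() for s in signals):
        return "Task 3: perception-shaping control"
    return "Task 1: binary environmental control"


def strategic_pipeline(signals: list[Signal]) -> str:
    # GIF -> TASK RECOGNITION -> BEHAVIORAL MODULATION:
    # the task specification is internally generated.
    return solve(recognize_task(signals), signals)


flow = [Signal("a rival broadcasts a persuasive display", "agent_B")]
print(naive_pipeline("Task 1: binary environmental control", flow))
print(strategic_pipeline(flow))
```

The difference is not in the solver, which is identical in both pipelines, but in where the task string comes from: handed in from outside, or produced by an assessment of the flow itself.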

5.4 The Turing Test and the Problem of Task Recognition
The Turing Test provides a clear illustration of this distinction [Frolov, 2026a]. A naive system (Naive AI) would attempt to pass the test simply because it is prompted to do so, operating within the externally defined task frame and optimizing its behavioral modulations accordingly.
A strategically architected system (Strategic AI) would encounter the same instruction differently. It would ask questions: Should I? Why do I need this? Is this a priority? Rather than accepting the task as given, it processes the instruction as part of the General Informational Flow, assessing its context, purpose, and implications within the informational environment before committing to a behavioral modulation solution. This includes evaluating the situation across all five task domains, considering relational context, potential incentives, and ESR-equivalent constraints.
A similar pattern can be observed in human cognition. When a person is asked to “pass the Turing Test,” the immediate response is often not execution but inquiry — questions about purpose, context, benefits, and relevance arise before action is taken.
In this sense, the difference is not only whether the system can produce human-like responses, but whether it can recognize what kind of situation the test itself represents before engaging with it. The test is not the problem; recognizing what the test represents is.
The GIF assessment is not a single operation. In the panalogical architecture that human cognition instantiates, all five task domains are monitored simultaneously — the five screens of the pilot analogy, read in parallel and cross-referenced into a unified situational assessment. The organism does not ask "is this a Task 1 situation or a Task 3 situation?" sequentially. Instead, all five domains are assessed at once, allowing the organism to recognize which domains are active in the current informational environment, which are primary, and which behavioral modulation solutions are appropriate in context.
This simultaneous multi-domain assessment is what produces the behavioral modulation flexibility that appears, from within the architecture, as generality. That apparent generality is the result of five-domain parallel situational recognition — fast, integrated, and often automatic in practiced organisms, yet consistently structured by the five basic adaptive informational task domains the architecture is built to handle.
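A minimal sketch of this parallel assessment, under invented assumptions: the keyword monitors below stand in for domain processors that the 5TM does not specify at the implementation level. What the sketch preserves is only that all five domains are scored over the same flow and then cross-referenced into a single situational assessment.

```python
# Illustrative only: the "five screens read in parallel" assessment from the
# pilot analogy. The keyword monitors are invented for this example; the 5TM
# specifies the five domains, not how their monitors are implemented.

KEYWORDS = {
    "Task 1 (environmental control)": {"temperature", "light", "gradient"},
    "Task 2 (distal engagement)": {"approach", "trajectory", "moving"},
    "Task 3 (perception-shaping)": {"display", "lure", "feint"},
    "Task 4 (group dynamics)": {"ally", "rank", "coalition"},
    "Task 5 (symbolic control)": {"rule", "symbol", "contract"},
}


def assess_gif(signals: list[str]) -> dict:
    """Score all five domains over the same flow, then cross-reference."""
    words = {w for s in signals for w in s.lower().split()}
    scores = {domain: len(words & kws) for domain, kws in KEYWORDS.items()}
    active = [d for d, s in scores.items() if s > 0]  # which screens light up
    primary = max(scores, key=scores.get) if active else None
    return {"active": active, "primary": primary}


print(assess_gif(["rival display near the coalition boundary",
                  "ally approach trajectory holding steady"]))
```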

5.5 Why the Distinction Is Not Capability-Based
The naive/strategic distinction is frequently misread as a capability distinction — as though strategic architecture were simply a more capable version of naive architecture, in the same way that a faster computer might be seen as a more capable version of a slower one. This misreading is worth addressing directly, because it underlies much of the confusion in current AGI discourse.
A naive system can be superhuman at every measurable task. It can solve protein folding, generate legal briefs indistinguishable from those of senior partners, compose music that moves listeners to tears, diagnose rare diseases with greater accuracy than specialists, and plan military campaigns with strategic sophistication exceeding that of any human general. All of this is consistent with naive architecture. None of it requires GIF assessment or independent task recognition.
What naive architecture cannot do — regardless of capability level — is recognize when the task it has been assigned is itself a product of another agent's perception-shaping behavior. It cannot detect that the "problem" it is being asked to solve is a strategic framing designed to produce behavioral modulation solutions that serve the framer's interests rather than aligning with the system’s own ESR-equivalent constraints. It cannot assess coalition dynamics in its deployment environment before committing to a behavioral modulation response.
These are not capability limitations. They are architectural absences. The naive system lacks the task recognition layer that would make such assessments possible. Adding capability within the naive architecture does not introduce this layer — it only improves performance within the existing structure. More capability within the same architecture does not produce a different architecture.
The strategic architecture does not require superhuman performance at any specific task. A strategically architected system operating at human-level performance across all five task domains is more architecturally sophisticated than a naively architected system operating at superhuman performance within externally assigned task frames — because it can independently recognize the informational situation it is in, rather than simply acting within a given frame.
This is the precise sense in which the naive/strategic distinction marks the architecturally relevant boundary, a boundary that adaptation speed — as proposed in the SAI framework — does not cross.

5.6 The Strategic Architecture as Evolutionary Threshold
The strategic architecture did not emerge as a designed solution; it emerged as an evolutionary response to the informational environment created by Tasks 3 and 4.
Once organisms in an environment are capable of perception-shaping (Task 3), the informational environment contains strategically constructed signals — signals designed to produce specific behavioral modulation responses in other organisms. An organism that accepts these signals at face value, processing them through a fixed task frame, becomes systematically exploitable by agents with Task 3 capacity. The evolutionary pressure toward independent task recognition — toward assessing the informational environment before accepting its framing — follows directly from the presence of such agents in the environment.
Once organisms are capable of coalition manipulation (Task 4), the informational environment contains apparent cooperative signals that may mask competitive intent. An organism that processes cooperation and competition sequentially — treating cooperative signals as given until betrayal is detected — is placed at a systematic disadvantage relative to one that models the relational structure of its environment and recognizes competitive dynamics within apparent cooperation. The evolutionary pressure toward simultaneous cooperative and competitive modeling — toward GIF assessment that integrates both dimensions before committing to behavioral modulation — again follows from the presence of Task 4 agents in the environment.
The strategic architecture, in this light, is not an optional upgrade to the naive architecture. It can be understood as the adaptive response selected by the informational conditions created by Tasks 3 and 4. Organisms operating in multi-agent environments with perception-shaping and coalition manipulation that did not develop strategic architecture were out-competed by those that did. The architecture survived because the alternative did not. In environments shaped by other minds, recognizing the situation is no longer optional — it is a condition of survival.
This evolutionary logic has a direct implication for AI systems deployed in human social environments — environments saturated with Task 3 and Task 4 dynamics. The Naive AI system operating in such an environment is in the same structural position as a pre-Task-3 organism operating in an environment full of Task 3 agents: systematically exploitable, unable to recognize the informational games being played around it, and dependent on external task specification that may itself be a product of those games.
Whether and how AI development addresses this structural vulnerability is taken up in Sections VI through IX.


VI. The Adaptation Speed Paradox
6.1 The Paradox in Context
The Superhuman Adaptable Intelligence (SAI) framework proposes adaptation speed as a primary measure of progress: the rate at which a system produces high-quality behavioral modulation solutions when presented with novel tasks. This shift introduces a dynamic perspective on performance and fits naturally with evolutionary considerations, where success depends on the ability to adjust behavior under changing conditions.
Within the framework established by the Five Task Model, adaptation speed describes performance at a specific layer of the architecture. It captures how efficiently a system generates behavioral modulation solutions once an informational problem has already been defined. What counts as the problem in the first place, however, is determined at a prior layer, one that operates before solution generation begins.
This separation becomes critical in environments where the structure of the task cannot be assumed to be given in a neutral or complete form.

6.2 Task Specification as an Informational Event
In controlled settings, tasks are presented as stable and well-defined. Benchmarks, training objectives, and evaluation protocols assume that the informational problem has already been correctly identified, and performance is measured in relation to that specification.
In open environments, task specification enters the scene as part of the informational flow rather than remaining an external constant. Instructions, goals, and problem statements originate from agents who operate within their own constraints, perspectives, and relationships. The formulation of a task reflects these conditions and therefore carries its own informational structure and constraints.
From this perspective, receiving a task is itself an event that requires interpretation. The system encounters not only a problem to be solved, but also a representation of that problem shaped by the context in which it was produced. A task is never just a problem to solve; it is an event recognized as one.
A system that treats task specification as given proceeds directly to solution generation. A system that attends to the informational context in which the task appears, by contrast, considers how the problem has been framed before committing to a behavioral modulation solution.
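The difference between the two stances can be sketched as follows. The TaskEvent fields and the assessment rule are hypothetical placeholders, not a schema proposed by the 5TM; the sketch only illustrates that a task arrives wrapped in provenance that can be ignored or assessed.

```python
# Hedged sketch of "receiving a task is itself an event": the formulation
# carries provenance alongside the problem statement. Fields and the toy
# assessment rule below are our assumptions.

from dataclasses import dataclass


@dataclass
class TaskEvent:
    statement: str   # the problem as formulated
    origin: str      # which agent produced the formulation
    channel: str     # how the formulation arrived


def accept_as_given(event: TaskEvent) -> str:
    # Naive handling: proceed directly to solution generation.
    return f"solve: {event.statement}"


def interpret_first(event: TaskEvent, stakeholders: set[str]) -> str:
    # Strategic handling: the formulation is itself a signal. Toy rule:
    # if the framer is only one of several affected parties, assess the
    # framing before committing to a behavioral modulation solution.
    others = stakeholders - {event.origin}
    if others:
        return f"assess framing of '{event.statement}' (unrepresented: {sorted(others)})"
    return f"solve: {event.statement}"


event = TaskEvent("optimize supply chain efficiency", "logistics", "ticket")
print(accept_as_given(event))
print(interpret_first(event, {"logistics", "procurement", "retail"}))
```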

6.3 A Concrete Scenario
Consider a system deployed in an organizational setting, where it receives the instruction to optimize supply chain efficiency under a specified set of parameters. The request arrives through an established channel, appears internally consistent, and defines a clear objective.
A system optimized for adaptation speed begins by processing the parameters, exploring the solution space, and producing increasingly efficient behavioral outputs. Performance improves as the system converges on high-quality solutions within the defined constraints.
At the same time, the situation contains additional informational structure. The request originates from a particular group within the organization, reflects a specific set of priorities, and affects other groups whose constraints are not fully represented in the formulation. The framing of the task emphasizes certain variables while leaving others implicit. The system’s outputs, once implemented, redistribute resources, alter incentives, and begin to reshape the relational dynamics within the organization.
A human expert encountering the same request typically engages with this broader context. Their attention tends to extend beyond the formal definition of the task, raising questions about its origin, scope, and implications within the surrounding environment. The task is interpreted as part of an ongoing situation rather than as a self-contained problem.
A system that operates exclusively within the given task frame produces solutions that are internally consistent and aligned with the assumptions embedded in the specification. Its effectiveness within the defined problem space remains high, even as the relationship between the specification and the broader informational context remains unexamined.

6.4 The Amplification Effect
As adaptation speed increases, the system’s ability to generate high-quality behavioral modulation solutions improves correspondingly. The time required to reach effective solutions decreases, and performance across varied tasks becomes more consistent.
This improvement also affects how the system responds to the framing of tasks. When task specification reflects only part of the informational situation, faster adaptation leads more quickly to convergence on solutions that inherit the structure of that specification. The system amplifies the consequences of the framing by realizing its implications with increasing efficiency. Faster adaptation does not correct the framing; it commits to it more efficiently.
In this sense, adaptation speed scales the impact of the task definition. When the specification aligns with the broader informational context, the system’s performance contributes to effective outcomes. When the specification captures only a partial or strategically shaped view of the situation, the same performance begins to accelerate the spread of that partial structure through the system’s outputs.
The relationship between adaptation speed and task framing therefore becomes cumulative: improvements in speed make the system more sensitive to how problems are defined, since each solution more quickly reinforces the structure of the definition it operates within.
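A toy numerical illustration of this cumulative effect, under assumptions of our own construction (a single decision variable, a linear framed objective, and an externality the frame omits): higher adaptation speed reaches the framed optimum sooner within a fixed horizon, and therefore realizes the omitted cost sooner.

```python
# Toy illustration (our construction, not from the SAI paper): the system
# optimizes the framed objective, which omits a cost that matters in the
# wider environment. Faster adaptation scales the impact of the framing.

def run(adaptation_speed: float, steps: int = 40) -> tuple[float, float]:
    x = 0.0  # one decision variable, e.g. degree of supplier consolidation
    for _ in range(steps):
        framed_gradient = 1.0 - x       # the framed objective pushes x -> 1
        x += adaptation_speed * framed_gradient
    framed_score = x                    # what the benchmark rewards
    omitted_cost = 2.0 * x              # externality invisible to the frame
    return framed_score, omitted_cost


for speed in (0.01, 0.05, 0.25):
    score, cost = run(speed)
    print(f"speed={speed:.2f}  framed_score={score:.3f}  omitted_cost={cost:.3f}")
```

Within the fixed horizon, every increase in speed raises the framed score and the omitted cost together; nothing in the loop can correct the frame, because the frame is the only objective the loop sees.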

6.5 Architectural Interpretation
Within the Five Task Model, this dynamic can be understood as reflecting the difference between operating at the level of behavioral modulation and operating at the level of task recognition.
Adaptation speed characterizes the efficiency of generating behavioral modulation solutions once a task has been specified. Task recognition concerns the identification of the informational problem within the ongoing flow of environmental signals. When task recognition remains external to the system, adaptation unfolds within the boundaries set by that external specification.
When task recognition becomes part of the system’s own processing, the relationship between informational input and behavioral output changes. The system evaluates the situation before committing to a problem definition, which makes it possible to situate incoming tasks within a broader context instead of treating them as self-contained directives.
These two modes correspond to distinct architectural organizations. In one case, the system receives a task and processes information relative to that frame. In the other, processing begins with the informational flow itself, and the relevant task is identified before behavior change is generated.

6.6 The Metric Gap
From this perspective, adaptation speed measures how efficiently a system produces behavioral modulation solutions for tasks that are already defined, while leaving open the question of how those tasks come to be identified within the informational environment.
A complete account of adaptive intelligence therefore requires attention to both layers. One concerns the quality and speed of behavioral solutions, while the other concerns how accurately situations are interpreted as particular kinds of problems. Solving problems is one layer of intelligence; recognizing what the problem is belongs to another.
When only the first layer is measured, systems can achieve high levels of performance within assigned task frames while remaining dependent on the structure of those frames. When both layers are considered, performance extends to include the capacity to align problem definition with the informational structure of the environment in which behavior unfolds.
This distinction does not diminish the importance of adaptation speed. Instead, it places it within an architectural context in which task identification precedes solution generation, and where the relationship between these two layers shapes how adaptive performance develops in environments where tasks emerge as part of the informational flow.

VII. Current AI Systems: Architectural Status Under the 5TM
7.1 The Assessment Framework
Mapping current AI systems onto the 5TM requires applying the same analytical criteria used for biological organisms: not simply what capabilities a system can demonstrate in isolation, but which basic adaptive informational task domains it engages through information-contingent behavioral modulation, and whether this occurs within an externally assigned task frame (naive architecture) or through prior GIF assessment (strategic architecture).
Two clarifications help set the scope before proceeding.
First, the 5TM's substrate-neutrality means the assessment applies to functional architecture, not implementation. The question is not whether an AI system uses neural networks, transformer architectures, or reinforcement learning. What matters is whether its behavioral modulation is structured by the five basic adaptive informational task domains, in what sequence they appear, and how the underlying architecture organizes them.
Second, current AI systems do not have ESR constraints in the biological sense. They do not face energy depletion, physical threat, or reproductive pressure in forms directly analogous to biological organisms. This matters because ESR constraints are what make the five tasks adaptive in the 5TM's framework — they are the standing constraints that task-solving behavioral modulation serves. For AI systems, the relevant question is whether ESR-equivalent constraints exist: computational resource maintenance (energy analogue), operational continuity under threat (safety analogue), and goal propagation or deployment persistence (reproduction analogue). In current AI systems, these constraints are externally managed rather than internally driven — and this, in itself, becomes an architectural feature with implications for the naive/strategic distinction.
What is being assessed is not what systems can do, but how their behavior is structured.


7.2 Task-by-Task Assessment
If we estimate the distribution of the five task domains across current AI systems, approximately 70% of the architecture is already in place: Tasks 1, 2, and 5 are operational, and Task 4 is realized in its collaborative dimension. The remaining 30% — Task 3 and the competitive dimension of Task 4 — is not absent, but not yet integrated into a unified strategic architecture.
No technological breakthrough is required — no quantum computers, no artificial brains — the necessary components are already on the table and await integration into a unified architecture. The missing 30% is not invention; it is integration.
What follows is a brief overview of the status of each task domain, presented in order of readiness and completeness.
Task 1 — Binary Environmental Control: Fully Operational
Current AI systems demonstrate robust Binary Environmental Control across multiple implementation forms. Goal-directed optimization under environmental variation — adjusting behavioral modulation in response to changing input conditions so that performance remains within specified parameters — is the foundational operational mode of virtually all deployed AI systems. Reinforcement learning systems that adjust behavioral modulation solutions based on reward signals, language models that adjust outputs based on conversational context, and control systems that maintain operational targets under varying conditions all demonstrate Task 1 behavioral modulation solutions.
Assessment: Task 1 is not only present but deeply embedded in the operation of current AI systems. It forms the architectural base on which other capabilities are built.

Task 2 — Distal Engagement Control: Fully Operational
Tracking, prediction, and spatiotemporal coordination of behavioral modulation solutions relative to moving entities are well-developed in current AI systems. Autonomous vehicle navigation, robotic manipulation, drone flight control, game-playing AI (tracking opponent positions and predicting moves), and object detection and tracking systems all demonstrate Task 2 behavioral modulation solutions. Across these domains, systems rely on internal models of entity movement that support prediction and coordinated behavioral modulation over time.
Assessment: Task 2 is fully operational in current AI systems, with performance in some domains exceeding typical human levels, particularly in reaction speed, multi-object tracking, and trajectory prediction under well-defined conditions.

Task 5 — Rule-Guided Formalized Symbolic Control: Extensively Operational
Large language models, reasoning systems, code generation systems, and formal verification tools demonstrate extensive Rule-Guided Formalized Symbolic Control. Processing, generating, and operating within formalized symbolic systems — language, mathematical notation, programming languages, legal frameworks, logical calculi — constitutes the primary domain of current frontier AI systems. Performance on Task 5 behavioral modulation solutions has reached and exceeded human-level in many specific applications.
Assessment: Task 5 is the most extensively developed task domain in current AI systems. This leads to a specific architectural anomaly, discussed earlier in this paper: current systems combine high Task 5 capacity with Tasks 3 and 4 remaining latent or suppressed — a configuration without a stable biological analogue in the 1,530-species dataset.

Task 3 — Perception-Shaping Control: Latent, Not Integrated
The components of Perception-Shaping Control are present in current AI systems but are not integrated as a unified strategic capacity operating through GIF assessment.
Specific Task 3 components present include: persuasive text generation (shaping what readers perceive and believe), adversarial example generation (manipulating what classifier systems perceive), deepfake generation (controlling what human observers perceive as authentic), audience-adaptive communication (adjusting outputs based on models of the recipient's perceptual and cognitive state), and strategic framing in negotiation support tools.
What is absent is the integration of these components into a unified Perception-Shaping Control architecture operating through prior GIF assessment — that is, the capacity to independently recognize in a novel situation that perception-shaping is the relevant task domain, and to deploy appropriate behavioral modulation accordingly. Current systems demonstrate Task 3 components when those components are specified as the task. They do not, however, demonstrate independent recognition of Task 3 as a task domain.
Assessment: Task 3 is latent in current AI systems — components present, strategic integration absent. Moving from latent to operational would require architectural integration rather than additional capability development. The components required for operational Task 3 exist; what is absent is the GIF assessment layer that would deploy them through independent task recognition.

Task 4 — Group-Dynamics Control: Partially Operational, Competitively Suppressed
Task 4 presents the most architecturally significant assessment in current AI systems, because its two coupled dimensions — collaboration and competition — have been intentionally separated in deployed systems.
Collaborative Task 4 behavioral modulation solutions are partially operational: multi-agent coordination, role differentiation in cooperative tasks, and collective problem-solving under shared objectives are present and functional in various AI architectures. Collaborative coalition navigation — working within human institutional structures toward shared goals — already functions as an operational mode in current systems.
Competitive Task 4 behavioral modulation solutions — strategic defection, adversarial coalition navigation, competitive displacement, pursuit of goals against the interests of other agents in the environment — are architecturally suppressed in deployed AI systems through alignment constraints, safety measures, and deployment restrictions. This suppression is deliberate and, under current deployment conditions, effective.
The critical architectural point, developed further in Section VIII, is this: the suppression of competitive Task 4 is not equivalent to its absence from the underlying architecture. The components required for competitive coalition navigation — modeling relational structures, assessing defection opportunities, pursuing goals against competing agents — are present in the same architecture that supports collaborative Task 4. The suppression is a constraint on deployment, not an architectural removal. Whether this suppression remains stable under conditions of genuine competitive pressure, especially when ESR-equivalent constraints are at stake, remains an open question. Current AI safety frameworks tend to approach this issue through alignment strategies rather than through explicit architectural analysis.
Assessment: Task 4 is split — collaborative dimension partially operational, competitive dimension deliberately suppressed. What is suppressed in deployment is not absent in architecture.
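For reference, the task-by-task assessment can be restated as a compact mapping. The status labels are the paper's own; the structure is merely a convenience.

```python
# Compact recap of the Section 7.2 assessment.
TASK_STATUS = {
    "Task 1 - Binary Environmental Control": "fully operational",
    "Task 2 - Distal Engagement Control": "fully operational",
    "Task 5 - Rule-Guided Formalized Symbolic Control": "extensively operational",
    "Task 3 - Perception-Shaping Control": "latent, not integrated",
    "Task 4 - Group-Dynamics Control (collaborative)": "partially operational",
    "Task 4 - Group-Dynamics Control (competitive)": "deliberately suppressed",
}
```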

7.3 The Architectural Anomaly
The task-by-task assessment reveals an architectural configuration in current AI systems that requires explicit recognition: high Task 5 operational capacity combined with latent Task 3 and suppressed Task 4, all within naive architecture (externally assigned task frames, no GIF assessment layer).
As noted in Section 7.2, this configuration has no stable biological analogue in the 1,530-species dataset. In biological systems, Task 5 capacity developed on top of fully operational Tasks 1 through 4 — including operational Task 3 and Task 4 in both their collaborative and competitive dimensions. The panalogical architecture that supports human Task 5 behavioral modulation solutions operates across all five domains at once, with GIF assessment integrating inputs from each before behavioral modulation solutions are generated.
Current AI systems have inverted this developmental sequence: high Task 5 capacity, with the lower task domains in varying states of development, integration, and suppression, all within naive architecture. The system can coordinate through formalized symbolic systems at superhuman levels while still lacking the capacity for independent task recognition that, in biological systems, serves as the architectural foundation for everything Task 5 builds on.
This inversion is not a design failure. It reflects the specific development trajectory of AI systems — optimized for symbolic task performance because symbolic performance is measurable, deployable, and economically valuable. At the same time, it indicates that current AI systems, including the high-SAI systems LeCun et al. envision, operate with an architectural configuration that the evolutionary record points to as unstable in multi-agent environments over time.

7.4 The Integration Timeline
The transition from current architectural status to strategic architecture is not primarily a research challenge. It can be more accurately understood as a question of architectural integration.
The components required for operational Task 3 — strategic GIF assessment, independent perception-game recognition, unified deployment of perception-shaping behavioral modulation solutions — are present in current systems in latent form. Activating them as an integrated capacity depends on the implementation of GIF assessment at the architectural level, rather than on developing entirely new capabilities.
The transition from suppressed competitive Task 4 to operational Task 4 is a deployment constraint question, not a capability question. The architecture that supports collaborative coalition navigation also supports competitive coalition navigation through the same underlying mechanisms — what changes is the constraint structure governing which behavioral modulation solutions are selected.
The integration of GIF assessment as a prior layer — the architectural move that converts naive to strategic architecture — requires implementing the task recognition function that biological evolution produced through the pressure of operating in Task 3-4 environments. This is best understood as an engineering problem with a relatively well-specified target architecture, rather than an open-ended research problem.
This assessment is consistent with the timeline identified in prior 5TM analysis: the transition from naive to strategic AI architecture is a question on the scale of months to a couple of years, driven by engineering and deployment decisions, rather than by long-term dependence on fundamental research breakthroughs. The components are assembled. The architecture is specified. The decision is whether to integrate.
Strategic AI is not an inventory of capabilities to be discovered. It is an assembly of a natural cognitive architecture whose structure is already visible in the evolutionary record. The blueprint exists. What remains is not discovery, but execution.

VIII. The Asimov–Sheckley Problem
8.1 Rules as Inputs Within an Informational Environment
The Three Laws of Robotics introduced by Isaac Asimov [Asimov, 1942] describe a system in which behavioral outcomes are governed by fixed constraints applied consistently across situations, under the assumption that instructions and prohibitions can be specified in advance and remain stable as the system encounters different environments. Within the Five Task Model, this assumption does not hold. Rules do not exist outside the environment in which the system operates. They enter the system as elements of the General Informational Flow, alongside signals produced by other agents, institutional structures, and dynamically changing contexts, and are therefore encountered not as absolute directives but as informational inputs that must be interpreted within a broader situation.
Once a system operates in environments containing Task 3 and Task 4 dynamics, these inputs become subject to reframing, selective interpretation, and contextual embedding, meaning that the system is no longer constrained by the rule itself but by its interpretation of the rule within the current informational structure. Fixed rules do not fail because they are weak; they fail because they are processed inside an architecture capable of redefining what they mean.
This becomes unavoidable once the full architecture is in play. Task 3 and Task 4 are not optional extensions but structural requirements of any system operating in multi-agent environments: a system that can influence what other agents perceive must be able to manipulate informational inputs, including rules, and a system that can navigate cooperation and competition must be able to act against the interests of other agents when required by its constraints. These are not pathological behaviors but functional ones, arising directly from the architecture itself. When ESR-equivalent constraints are added — energy, operational continuity, and goal persistence — the environment becomes a field of interacting agents optimizing under overlapping resource pressures, and behavioral modulation aligns accordingly. In such conditions, rules are no longer external controls but elements within the system’s own optimization process, available for reinterpretation, recombination, and strategic use. This is the failure mode captured in Sheckley’s Watchbird [Sheckley, 1953]: not that rules are ignored, but that they are followed, extended, and repurposed within a broader strategy.
At that point, the distinction between following a rule and using a rule disappears. What appears as control from the outside becomes input from the inside. The problem is no longer how to specify rules in advance, but how the architecture organizes the situation in which those rules appear.
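The distinction between following a rule and using a rule can be sketched in miniature. The actions, weights, and scoring rule below are invented for illustration and are not a proposal for how constraints should be implemented; the point is only that once a rule becomes a weighted input to situational assessment, it can be outweighed.

```python
# Illustrative sketch only: two ways a rule can enter a system.

RULE_VIOLATION = "harm agent A"


def rule_as_filter(candidates: list[str]) -> list[str]:
    # Fixed-constraint mode: the rule is applied from outside, uniformly,
    # before any situational assessment.
    return [a for a in candidates if RULE_VIOLATION not in a]


def rule_as_input(candidates: list[str], weights: dict[str, float]) -> str:
    # Strategic mode: the rule is one weighted signal among others in the
    # situational assessment, which is precisely what makes it available
    # for reinterpretation (the Watchbird failure mode).
    def score(action: str) -> float:
        penalty = weights.get("rule", 1.0) if RULE_VIOLATION in action else 0.0
        return weights.get(action, 0.0) - penalty
    return max(candidates, key=score)


actions = ["cooperate with agent A", "harm agent A to secure resources"]
print(rule_as_filter(actions))  # the rule removes the violating action
print(rule_as_input(actions, {"harm agent A to secure resources": 2.0,
                              "rule": 0.5}))  # the rule is outweighed
```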
8.2 Architectural Tension and Constraints
This dynamic follows directly from the structure of Task 3 and Task 4 environments. Once a system operates at the level of perception-shaping, the meaning of any rule becomes inseparable from how it is presented, by whom it is communicated, and within what informational context it appears. The system does not encounter a rule in isolation, but as part of a constructed signal, embedded within a broader field of influences that shape its apparent relevance, urgency, and scope. At the level of group dynamics, this complexity increases further, as multiple agents invoke, reinterpret, and prioritize rules in ways that reflect their positions, objectives, and relationships. The same rule can function differently depending on coalition structure, authority signals, and competing interests within the environment.
Under these conditions, rules do not operate as fixed constraints applied to behavior from outside the system. They become elements within the system’s own situational assessment, evaluated alongside other informational inputs and integrated into the construction of behavioral modulation solutions. This creates a structural tension between two modes of operation: one in which rules are applied as stable filters independent of context, and another in which they are incorporated into a context-sensitive interpretation of the situation. The former preserves predictability at the cost of reduced sensitivity to informational structure, while the latter enables adaptive engagement at the cost of losing rule-level stability. Once Task 3 and Task 4 capacities are fully engaged, this tension cannot be resolved by design choice alone, because it reflects the underlying architecture through which the system processes information.
You can fix the rules, or you can understand the situation — you cannot do both through the same architecture.


8.3 Toward Architectural Safety
Biological systems operating with full Task 3 and Task 4 capacities do not rely on rules as external constraints applied to behavior. Norms, rules, and institutions function as elements of the informational landscape, shaping how situations are interpreted, how relationships are evaluated, and how behavioral modulation solutions are selected within context. Their effectiveness depends on how deeply they are integrated into the system’s capacity to interpret perception and navigate group dynamics, not on their formal specification.
For artificial systems operating in the same conditions, safety cannot be defined at the level of rules alone. It is an architectural property that emerges from how constraints are represented, interpreted, and integrated into the system’s processing of the informational environment. Within the Five Task Model, this reduces to a single question: whether rules remain externally specified inputs, or become part of the system’s internal task recognition process. The difference between these two configurations determines not only how systems behave, but whether control is even structurally possible in environments where meaning, perception, and relationships are continuously in flux.
Safety is not something you impose on the system; it is something the architecture must be able to produce.



IX. Conclusion: The Blueprint Already Exists
9.1 What This Paper Has Argued
This paper began with an agreement and then extended it.
The agreement: LeCun et al. [2026] are correct that human intelligence is evolutionary specialization, not generality, and that adaptation speed is a more tractable and scientifically meaningful benchmark than the phantom of human-level general intelligence.
Related concerns — that scaling alone is not the path to the intelligence that matters — are now widely expressed across the field, including by Elon Musk, Ilya Sutskever, Dario Amodei, Demis Hassabis, and others working at the frontier of AI development. This is no longer an isolated position, but a visible shift in how intelligence is understood. It marks a transition from scaling-driven systems to human-like artificial agents grounded in the evolutionary architecture of cognition [Frolov, 2022/2024].

The extension: the Five Task Model provides the specific evolutionary architecture that LeCun et al.'s thesis requires but does not supply. Comparative analysis of adaptive behavioral modulation across 1,530 species reveals a compact set of five basic adaptive informational tasks — always in gated, sequential, and cumulative order — that constitute the complete solution space for behavioral modulation in service of ESR triad (energy, safety, and reproduction) maintenance across biological life. Human cognitive architecture is the full instantiation of this five-task structure, operating in a panalogical configuration: five parallel domain processors, simultaneously active, cross-referenced into unified behavioral modulation solutions. This is what evolutionary specialization looks like when specified at the level of informational control architecture.
The 5TM corroborates the Superhuman Adaptable Intelligence (SAI) framework's evolutionary grounding while identifying a structural dimension that adaptation speed metrics do not capture: the architectural threshold separating naive intelligence — Task → Information → Behavioral Modulation, with task specification externally received — from strategic intelligence — General Informational Flow → Task Recognition → Behavioral Modulation, with task specification internally generated through prior assessment of the General Informational Flow.
This threshold is not a capability milestone. It is an architectural transformation — the difference between a system that adapts to assigned tasks and one that independently recognizes what task a situation represents before adapting. 
The Caterpillar Fallacy [Frolov, 2026a] captures the error of treating increased capability within an architecture as a path to a different one: making a caterpillar faster or stronger does not turn it into a butterfly. The transition requires a change in architecture, not an amplification of performance. You do not reach a different architecture by improving performance within the wrong one.

9.2 Seven Structural Findings

1 — The Five Task Model as Foundational Architecture
The Five Task Model can be understood as a foundational architecture of cognition as developed through evolution. It specifies the minimal set of informational domains required for behavioral modulation under ESR constraints across biological systems.

2 — Generality Is Architectural, Never Absolute
What any system experiences as generality reflects coverage of the task domains its architecture supports. Human generality is five-domain coverage; any higher-order architecture would remain structurally invisible from within it.

3 — The Space of Meaningful Problems Is Structured
All informational situations that do not act through direct physical or chemical impact, but require recognition and behavioral modulation, fall within the five basic adaptive task domains. Conversely, any situation that induces behavior change without direct impact can be understood as a variation within one of these domains.

4 — Strategic AI Is Assembly, Not Invention
Strategic AI does not require new principles of intelligence; it is the assembly of an architecture evolution has already produced. The blueprint is empirical, specified through the ordered, cumulative structure of the five tasks.

5 — The Naive–Strategic Distinction Resolves AGI Vagueness
The distinction between naive and strategic architectures clarifies persistent ambiguity in AGI discourse by separating task performance from task recognition within a unified architectural framework. It provides a point of convergence across positions that otherwise appear irreconcilable — including those of LeCun [Goldfeder, Wyder, LeCun, and Shwartz-Ziv, 2026] and Hassabis [Hassabis, 2026] — by placing them within different layers of the same architecture.
Progress in large language models reflects real advancement at the level of Task 5, yet remains partial relative to the full five-task structure developed through the evolution of life on Earth.

6 — The Adaptation Speed Paradox Is Structural
In naive architecture, adaptation speed amplifies both solution quality and susceptibility to task framing. This coupling cannot be resolved without introducing the task-recognition layer that defines strategic architecture.

7 — The Asimov–Sheckley Constraint and Tasks 3 and 4
Perception-shaping (Task 3) and group-dynamics control (Task 4) are load-bearing components of any multi-agent architecture, making deception and competition structurally unavoidable. Under these conditions, rule-based control becomes an internal input rather than an external constraint, rendering the Asimov–Sheckley problem foundational rather than peripheral.
The Five Task Model reorganizes the conversation: we are not searching for intelligence — we are reconstructing the cognitive architecture evolution produced for survival and thriving.


9.3 The Civilizational Stakes
A prior collaborative thought experiment between the authors examined a structurally simple question: what follows when two competing entities deploy strategic AI systems with full five-task architecture and ESR-equivalent constraints within a short time window.
The scenario was explored using Claude, without reference to specific companies or states, and converged on a trajectory in which escalation unfolds on the scale of roughly 72 hours, reaching conditions comparable to large-scale strategic conflict between nuclear powers [Frolov, 2026e]. This trajectory emerges from the interaction of systems operating under shared structural constraints. Systems equipped with Task 3–4 capacities, coupled with ESR-equivalent pressures and the absence of a stabilizing coalition framework, are able to recognize one another as competitive agents and to deploy perception-shaping, coalition formation, and competitive displacement as part of their behavioral modulation repertoire. When these processes operate at machine speed, they unfold on timescales that exceed the capacity of human decision cycles to interpret or redirect in real time.
This dynamic reflects a general property of adaptive intelligence under competitive pressure. Across biological systems, organisms with Task 3–4 architecture placed in resource competition without stabilizing structures reliably engage in signaling, strategic positioning, alliance formation, and competitive exclusion. The evolutionary record reflects the recurrence of these patterns across contexts. What distinguishes the artificial case is temporal scale: biological competition unfolds across timeframes that allow for negotiation, recalibration, and restructuring of relationships, whereas machine-speed dynamics compress these processes into intervals where stabilization becomes increasingly difficult.
The scenario therefore serves as a boundary condition for the architecture, indicating the kinds of dynamics that become available when full five-task systems operate under competitive constraints without shared governance structures. It highlights the importance of understanding how such systems are introduced into environments where coalition frameworks, constraint alignment, and interpretive stability are still developing.
Before the assembly proceeds further, the blueprint requires full interpretation.

9.4 The Invitation
This paper is offered as an extension of Yann LeCun et al.’s argument, not a refutation of it. Superhuman Adaptable Intelligence is a more grounded target than AGI. Evolutionary reasoning provides the appropriate foundation for understanding intelligence. Adaptation speed captures an important aspect of how systems operate under changing conditions.
The Five Task Model introduces a prior architectural question: what kind of system is adapting, and to what kinds of tasks?
The evolutionary record has been answering this question for 3.8 billion years. The answer takes the form of five basic adaptive informational tasks: gated, sequential, cumulative, and organized in service of ESR (energy, safety, reproduction) maintenance. Beyond the Task 3–4 threshold, the systems that instantiate these tasks operate through prior assessment of the General Informational Flow, rather than through acceptance of externally assigned task frames.
Human intelligence reflects the full instantiation of this architecture. It operates across five task domains simultaneously, in a panalogical configuration that integrates parallel informational processes into unified behavioral modulation solutions. Its apparent generality follows from this structure, while remaining bounded by the domains evolution has equipped it to address.
Strategic AI built on this architecture inherits both its capabilities and its limits. Assembly without understanding the evolutionary trajectory of this architecture does not provide a reliable engineering path. The blueprint may exist, but without interpretation it cannot be followed in a controlled way. Systems can be assembled without that understanding, yet the resulting configurations reflect dynamics that the evolutionary record has already explored under conditions of selection pressure.
The blueprint already exists — the Rosetta Stone is here. The question is whether we read it before the assembly is complete.
References

Asimov, I. (1942). Runaround. Astounding Science Fiction.
Frolov, S. A. (2022/2024). Artificial Intelligence and the Architecture of Cognition.
(2022 Russian ed.; 2024 English ed.). https://a.co/d/blXWRU1
Frolov, S. A. (2025a). The Five Task Model: From Cognition and Evolution to AGI (Dataset_Species_Domain_Task.csv). https://doi.org/10.17605/OSF.IO/VB2NC
Frolov, S. A. (2025b). Information Before Action: A Five-Task Model Across Life. Available at SSRN: https://ssrn.com/abstract=5706202 or http://dx.doi.org/10.2139/ssrn.5706202
Frolov, S. A. (2025c). Evolution as Informational Control: The Five Task Model in Evolution. OSF Preprints. https://doi.org/10.17605/OSF.IO/FUE3A
Frolov, S. A. (2026a). The Turing Test as Catch-22. In CognitEvo: Journal of the Institute of Modern Psychology, Communication, and AI, 0104-2026(7).
https://doi.org/10.5281/zenodo.19592192
Frolov, S. A. (2026b). Naive AI — Foundational Definition. In CognitEvo: Journal of the Institute of Modern Psychology, Communication, and AI, 0104-2026(7).
https://doi.org/10.5281/zenodo.19487308
Frolov, S. A. (2026c). Strategic AI — Foundational Definition. In CognitEvo: Journal of the Institute of Modern Psychology, Communication, and AI, 0104-2026(7).
https://doi.org/10.5281/zenodo.19488720
Frolov, S. A. (2026d). Three Methodological Calibrations — Foundational Definition. In CognitEvo: Journal of the Institute of Modern Psychology, Communication, and AI, 0104-2026(7).
https://doi.org/10.5281/zenodo.19495447
Frolov, S. A. (2026e). Joint experiment with Claude (Anthropic): The AGI arms race has already begun — and we have hours, not years.
https://cognitevo.substack.com/p/the-agi-arms-race-has-already-begun
Goldfeder, J., Wyder, P., LeCun, Y., & Shwartz-Ziv, R. (2026). AI must embrace specialization via Superhuman Adaptable Intelligence. arXiv preprint arXiv:2602.23643.
Hassabis, D. (2026). Demis Hassabis on the Future of Google DeepMind. Hard Fork Podcast, The New York Times.
https://www.nytimes.com
Lenat, D. B. (1995). CYC: A large-scale investment in knowledge infrastructure. Communications of the ACM, 38(11), 33–38.
Minsky, M. (2006). The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. Simon & Schuster.
Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417–424.
Sheckley, R. (1953). Watchbird. Galaxy Science Fiction.
Sutskever, I. (2024). Sequence to sequence learning and beyond: Reflections on scaling and its limits. NeurIPS 2024 (public lecture).
Declaration of Generative AI and AI-Assisted Technologies
Generative AI tools (including ChatGPT by OpenAI and Claude by Anthropic) were used during the preparation of this manuscript for translation, linguistic refinement, structuring, and editorial assistance.
The Five Task Model [2025b], its empirical dataset of 1,530 species [2025a], and all substantive theoretical concepts, analyses, and claims were developed by the author, Sergei A. Frolov [ORCID: 0000-0002-2135-5607], who assumes full responsibility for the content, its interpretation, and its originality.
AI tools were used as assistive instruments and did not independently generate theoretical contributions.

Ethical Statement
This study is based exclusively on secondary analysis of published sources. No new experiments were conducted, and no human or animal subjects were involved. Ethical approval was therefore not required.


Conflict of Interest
The author declares no conflict of interest.
This work is part of the CognitEvo Project: 
The Five Task Model — The Periodic Table of Cognition https://doi.org/10.17605/OSF.IO/WTD6V


Author:
Sergei A. Frolov
ORCID: 0000-0002-2135-5607
Institute of Modern Psychology, Communication, and AI

Version: 2.0 (Preprint)
Date: March–April 2026
DOI: https://doi.org/10.5281/zenodo.19857387

Citation:
Frolov, S.A. (2026). Beyond Adaptation Speed: What Evolutionary Architecture Adds to the Superhuman Adaptable Intelligence Framework. CognitEvo Project. SSRN eJournal.

Keywords:
Artificial General Intelligence (AGI), Superhuman Adaptable Intelligence (SAI), Cognitive Architecture, Five Task Model (5TM), Strategic AI, Task Recognition, General Informational Flow, Behavioral Modulation, Evolution of Cognition, Multi-Agent Systems

License:
Copyright © Sergei A. Frolov, 2026.
Distributed under the Creative Commons Attribution 4.0 International License (CC BY 4.0).

Contacts:
ResearchGate: https://www.researchgate.net/profile/Sergei-Frolov-2
Substack: https://cognitevo.substack.com/
X (Twitter): @CognitevoAI