A bit more reflection on how AWE flight automation will progress:
Early experimental AWE effectively treats the end-to-end system as a
single intelligent "agent." It will be increasingly useful to define
multi-agent control architectures. JoeF has initiated a classification
project for AWECS. A valuable role for a formal knowledge-based
ontology is to semantically interface low-level controllers as agents in a
complex kitefarm. One would then program and debug the multi-agent
kitefarm in a high-level language like CycL, an extended form of
first-order logic.
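As a rough illustration (plain Python standing in for CycL-style assertions; every concept and relation name here is made up for the sketch), an AWECS agent taxonomy and its relations might look like:

```python
# Toy AWECS ontology: a taxonomy plus subject-predicate-object triples.
# Purely illustrative; not actual CycL and not any real AWECS vocabulary.

ontology = {
    "concepts": {"Agent", "Kite", "Reel", "Tether", "Wind"},
    "is_a": {            # taxonomy links: child -> parent
        "Kite": "Agent",
        "Reel": "Agent",
    },
    "relations": [       # (subject, predicate, object) triples
        ("Kite", "communicatesVia", "Tether"),
        ("Reel", "communicatesVia", "Tether"),
        ("Kite", "cooperatesWith", "Reel"),
    ],
}

def is_agent(concept: str) -> bool:
    """Follow is_a links upward to decide whether a concept is an Agent."""
    while concept in ontology["is_a"]:
        concept = ontology["is_a"][concept]
    return concept == "Agent"
```

A real knowledge base would support inference over such triples; the point is only that agenthood becomes a queryable property rather than an implicit assumption in control code.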
An obvious place to first introduce a multi-agent model is for a
"smart-reel" and a "smart kite" to each have distinct agency, but know how
to cooperate. Thus one would change kites or reels according to need and
expect the shifting agents to still work together through a well-defined
interface. Many failure modes would then be logically "fire-walled." The
reel might jam or the tether might part, yet the kite would know how to land
itself. The kite agent might "lock-up" in a default stable flight mode,
and yet the reel would know how to bring the kite in like a pro. A tether
has quasi-agency, especially if thick or heavy, but there is no smart
tether with actuation capabilities yet. Thus the tether is not
initially a full agent, but a noisy communication interface between reel
and kite, which would each carry custom protocols for the set of tether
states and options. Similarly, the wind acts as a high-complexity
quasi-agent, or even a multi-agent, as an AWECS hunts for "cooperative"
parts of the wind-field.
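The reel/kite division of agency can be sketched in a few lines. This is a toy (message names, modes, and the interface itself are all assumptions for illustration, not any real AWECS protocol), but it shows the fire-walling: each agent carries its own response to the other's failure.

```python
# Two agents that cooperate only through a narrow message interface,
# so either one can fail without taking the other down with it.

class KiteAgent:
    """Kite-side agent: carries its own failure responses."""
    def __init__(self):
        self.mode = "power_generation"

    def on_message(self, msg):
        if msg == "REEL_JAMMED":
            self.mode = "self_land"      # reel failed: kite lands itself
        elif msg == "REEL_IN":
            self.mode = "glide_in"

class ReelAgent:
    """Reel-side agent: recovers the kite when the kite agent fails."""
    def __init__(self):
        self.action = "idle"

    def on_message(self, msg):
        if msg == "KITE_LOCKED_UP":      # kite stuck in default stable mode
            self.action = "reel_in_slowly"

kite, reel = KiteAgent(), ReelAgent()
reel.on_message("KITE_LOCKED_UP")   # kite agent locked up -> reel brings it in
kite.on_message("REEL_JAMMED")      # reel jammed -> kite lands itself
```

In this picture the noisy tether channel would sit between the two `on_message` calls, dropping or garbling messages, which is exactly why each side needs sane default behavior when the other goes silent.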
There will be many advantages to multi-agent control. Dense-array flocking
behaviors naturally emerge if every kite agent has a few simple rules to
follow. A mature AWE automation environment will include full human-agent
models, not just the basic manual override of the early systems. One can
expect advanced flight automation agents to resist human error and abuse,
just as the latest airliners do not let a pilot deliberately crash.
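The "few simple rules" point is the classic boids insight. A one-dimensional toy (gains, spacing, and step count all arbitrary choices for the sketch) shows cohesion plus separation pulling scattered kites into a loose, spaced cluster with no central coordinator:

```python
# Flocking from two local rules per kite agent:
#   cohesion  - steer toward the group centroid
#   separation- push away from neighbors closer than `spacing`
# 1-D toy only; not a flight controller.

def flock_step(positions, spacing=1.0, gain=0.1):
    """One synchronous update of every kite's position."""
    centroid = sum(positions) / len(positions)
    updated = []
    for i, x in enumerate(positions):
        v = gain * (centroid - x)                  # cohesion
        for j, y in enumerate(positions):
            if i != j and abs(x - y) < spacing:    # separation
                v += gain * (x - y)
        updated.append(x + v)
    return updated

kites = [0.0, 0.1, 5.0]      # one straggler far from the pair
for _ in range(50):
    kites = flock_step(kites)
```

Because both rules sum to zero across the flock, the centroid stays fixed while the spread contracts toward the separation distance.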
Next: Defining the top-level AWECS ontology.
~Dave Santos Nov.
Comment and development of this topic will be occurring here.
All, send notes, drawings, and photographs!
Terms and aspects (commentary is welcome):
1) Lack of an Adequate Domain Model- It's impossible to write
good-enough code if the problem (and hardware) is not fully understood,
and safety-critical code must be written to "clean-room" standards.
3) Missing Data- No truly adequate data exists for real windfields, and
even less for the dynamic interaction of a kite with real wind.
4) Hyper Chaos- Multiple sources of chaos multiplied together: the
windfield, the kite (as a multi-pendulum), system failure modes, and
the forecasting horizon.
5) Sensor Uncertainty- "Soda-straw" view, error, decalibration, noise,
and latency.
6) Computational Intractability- Inherent mathematical intractability
and excess latency.
7) Exception Handling- Completely unforeseen events, and the kite
"saves" at which human masters remain unbeatable.
8) I Forget.
9) We'll find out...
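Item 4 is easy to demonstrate with a toy. In this sketch (the logistic map is a stand-in for any chaotic flight dynamics; the parameter and step count are arbitrary), a sensor error of one part per billion grows to an order-one state error within a few dozen steps:

```python
# Sensitive dependence: two trajectories of a chaotic map, identical
# except for a tiny "sensor" error, diverge completely.

def logistic(x, r=3.9):
    """One step of the logistic map, chaotic at r = 3.9."""
    return r * x * (1.0 - x)

true_x = 0.4                 # the "real" state
measured_x = 0.4 + 1e-9      # same state seen through a 1e-9 sensor error

max_div = 0.0
for _ in range(60):
    true_x = logistic(true_x)
    measured_x = logistic(measured_x)
    max_div = max(max_div, abs(true_x - measured_x))
```

The divergence compounds exponentially, which is why long-horizon open-loop prediction is hopeless and control must constantly re-anchor to fresh (and themselves uncertain) measurements.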
Human-supervised partial autonomy ("simple" autopilots and support
systems) remains the only practical option for now. Passive stability
is smart. KiteLab's toy-scale passively automated AWECS seem to be the
only working exceptions (including self-relaunch), but they can't
scale up safely without adding human supervision. Fortunately,
economy-of-scale will kick in to pay pilots. At gigawatt scales,
piloting cost becomes a small fraction of operating expense.
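A minimal sketch of what human-supervised partial autonomy means in code (the function, setpoint, and gain are all illustrative assumptions): the autopilot runs a simple proportional law, but a human command, when present, always wins.

```python
# "Simple" autopilot with manual override: holds tether tension with a
# proportional law unless the human supervisor issues a command.

def control_step(tension, setpoint=100.0, gain=0.05, human_cmd=None):
    """Return a reel-speed command for one control cycle."""
    if human_cmd is not None:
        return human_cmd                   # human supervision wins
    return gain * (setpoint - tension)     # proportional tension hold
```

A mature system would eventually sanity-check the human input as well, which is where the airliner-style resistance to pilot error would enter.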