Conceptual Autonomy of Agents

Reference:

Timo Honkela. Conceptual autonomy of agents. In ICAART 2009 - Proceedings of the International Conference on Agents and Artificial Intelligence, page 9. INSTICC Press, 2009.

Abstract:

Abstract for keynote presentation: It is commonplace to view autonomous agents in terms of how much freedom they exhibit in their goal-directed behavior; the focus is then on the level of motivational autonomy. In this presentation, however, I will discuss why and how the conceptual autonomy of agents is another crucial issue in the field of multiagent system development. In many systems, the agents are provided with a shared conceptual ground. This poses severe limitations on the autonomy of the agents, since it is a reasonable requirement that autonomous agents should be able to interact with each other robustly in open-ended, changing environments. As each agent has its own developmental history, and as the environments vary to some extent from agent to agent, each agent must build a model of its environment in an individual manner. On the other hand, the agents need to compare these models with each other to establish shared conceptual ground. This can realistically take place through communication in partially shared contexts, leading to a sufficient convergence of the mappings between the language used by the agents and their individual conceptual systems. In the presentation, I will also discuss the implications these considerations have for the field of artificial intelligence in general.
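
The convergence-through-communication process described in the abstract can be illustrated with a minimal naming-game simulation, a standard model from the language-evolution literature rather than an algorithm from the presentation itself. In this sketch, each agent starts with its own (initially empty) vocabulary for a single shared object; repeated pairwise interactions in a shared context let the population converge on one common name. All function and variable names here are illustrative assumptions.

```python
import random

def naming_game(n_agents=20, max_rounds=20000, seed=0):
    """Minimal naming game: agents converge on a shared name for a
    single object through repeated pairwise interactions.
    Returns the number of interactions until convergence, or None."""
    rng = random.Random(seed)
    # Each agent's individually developed set of candidate names.
    vocab = [set() for _ in range(n_agents)]
    for round_no in range(max_rounds):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not vocab[speaker]:
            # No name yet: the speaker invents one on its own.
            vocab[speaker].add(f"w{rng.randrange(10**6)}")
        word = rng.choice(sorted(vocab[speaker]))
        if word in vocab[hearer]:
            # Communicative success: both collapse to the agreed name.
            vocab[speaker] = {word}
            vocab[hearer] = {word}
        else:
            # Failure: the hearer records the speaker's name.
            vocab[hearer].add(word)
        # Converged when every agent holds the same single name.
        if all(len(v) == 1 for v in vocab) and len(set.union(*vocab)) == 1:
            return round_no + 1
    return None

rounds = naming_game()
```

The point of the sketch is that no agent is given a shared conceptual ground in advance; agreement emerges only from local interactions, mirroring the abstract's claim that shared ground must be negotiated rather than installed.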

Suggested BibTeX entry:

@inproceedings{HonkelaICAART09,
    author = {Timo Honkela},
    booktitle = {ICAART 2009 - Proceedings of the International Conference on Agents and Artificial Intelligence},
    pages = {9},
    publisher = {INSTICC Press},
    title = {Conceptual Autonomy of Agents},
    year = {2009},
}

The full text of this work is not available online here.