Cognitive Agent Modeling

Added by mariopaolucci | Nov 27, 2014 08:42

Cognitive Agent Modeling
1 What is cognition?
1.1 Cognitive Psychology
1.1.1 Study of mental processes
1.1.2 Intentional Stance (Dennett)
1.1.2.1 Example: reputation
1.1.2.1.1 Folk psychology
1.1.2.1.1.1 responsibility bias
1.1.2.1.2 Psychology
1.1.3 Examples
1.1.3.1 priming
1.1.3.2 Theory of social evaluation
1.1.3.3 Social bonding
1.1.3.4 George A. Miller's 1956 Psychological Review article "The Magical Number Seven, Plus or Minus Two"
1.2 What is left out?
1.2.1 Biological processes
1.2.1.1 neural networks
1.2.1.1.1 (but cognitive neuroscience exists)
1.2.1.2 drugs
1.2.1.3 illness
1.2.2 No-mind approaches
1.2.2.1 behaviourism
1.2.2.1.1 Cognitive revolution (1950s) as an interdisciplinary movement focusing on the mind
1.2.2.1.1.1 psychology, anthropology, linguistics, artificial intelligence, computer science, and neuroscience.
1.2.2.1.1.2 A key idea in cognitive psychology was that by studying and developing successful functions in artificial intelligence and computer science, it becomes possible to make testable inferences about human mental processes. This has been called the reverse-engineering approach. (wikipedia)
1.2.3 Usage differs, for example as for the place of emotions
1.2.3.1 "The literature on peer review has focused almost exclusively on the cognitive dimensions of evaluation and conceives of extracognitive dimensions as corrupting influences. In my view, however, evaluation is a process that is deeply emotional and interactional." (Lamont, 2009)
2 Why have cognitive agents?
2.1 For the same reason that we have agents..
2.1.1 Processes and mechanisms are the base of computation
2.1.2 Our mind is better at understanding processes than at understanding complex math
2.1.2.1 ... I don't have support for this. Is it true?
2.1.3 However, the mind performs much better when presented with familiar terms
2.1.3.1 Wason's famous four-card selection task (1966)
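The logic of the selection task can be sketched in a few lines of Python (an illustration added here, not part of the original map). Under the rule "if a card shows a vowel on one side, it shows an even number on the other", only the vowel and the odd number can falsify the rule:

```python
# Wason's selection task: cards show E, K, 4, 7; the rule under test is
# "if a card has a vowel on one side, it has an even number on the other".
# Only cards whose hidden side could falsify the rule need turning:
# the vowel (its back might be odd) and the odd number (its back might
# be a vowel). Most subjects instead pick the vowel and the even number.

def must_turn(face):
    """True if turning this card could falsify the rule."""
    if face.isalpha():
        return face.upper() in "AEIOU"   # vowel: check the consequent
    return int(face) % 2 == 1            # odd number: check the antecedent

cards = ["E", "K", "4", "7"]
print([c for c in cards if must_turn(c)])  # ['E', '7']
```

Performance on the same logical structure improves dramatically when the rule is recast in familiar social terms (e.g., drinking-age checks), which is the point of node 2.1.3.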
2.2 For the AI dream of reverse engineering
2.3 For simulating socially complex situations
2.3.1 Mindchanging
2.3.1.1 to correct self-harming habits
2.3.2 Complex decisions based on trust and reputation structures
2.3.2.1 design of targeted interventions
2.3.2.1.1 For example, smoking ban in Italy and the Netherlands
2.3.3 as soon as you consider context, simple models (Prisoner's Dilemma, Public Goods Game) fail to adequately represent reality
3 What software for cognitive modeling?
3.1 Traditional, single-agent cognition (thanks to J. Sabater for this part)
3.1.1 CLARION
3.1.1.1 The Clarion cognitive architecture project aims to investigate the fundamental structures of the human mind by synthesizing many intellectual ideas into a unified, coherent model of cognition. In particular, our goal is to explore the interaction of implicit and explicit cognition, emphasizing bottom-up learning (i.e., learning that involves first acquiring implicit knowledge and then acquiring explicit knowledge on its basis).
3.1.1.2 Our research is directed at forming a (generic) cognitive architecture that captures various cognitive processes with the ultimate goal of providing unified explanations for a wide range of cognitive phenomena. The current objectives of this project are two-fold:
3.1.1.3 Developing artificial agents in certain cognitive task domains
3.1.1.4 Understanding human decision-making, learning, reasoning, motivation, and meta-cognition in other domains.
3.1.1.5 The Clarion cognitive architecture project is headed by Professor Ron Sun and has been supported by such agencies as ONR, ARI, and others.
3.1.1.6 Status of publications on the website as of Nov 14
3.1.2 SOAR
3.1.2.1 Rule-based
3.1.2.2 From Intro:
3.1.2.2.1 Soar is a general cognitive architecture for developing systems that exhibit intelligent behavior. Researchers all over the world, both from the fields of artificial intelligence and cognitive science, are using Soar for a variety of tasks. It has been in use since 1983, evolving through many different versions to where it is now Soar, Version 9.
3.1.2.2.2 We intend ultimately to enable the Soar architecture to:
3.1.2.2.2.1 work on the full range of tasks expected of an intelligent agent, from highly routine to extremely difficult, open-ended problems
3.1.2.2.2.2 represent and use appropriate forms of knowledge, such as procedural, semantic, episodic, and iconic
3.1.2.2.2.3 employ the full range of problem solving methods
3.1.2.2.2.4 interact with the outside world, and
3.1.2.2.2.5 learn about all aspects of the tasks and its performance on them.
3.1.2.2.2.6 In other words, our intention is for Soar to support all the capabilities required of a general intelligent agent.
3.1.2.2.3 In Soar, every decision is based on the current interpretation of sensory data, the contents of working memory created by prior problem solving, and any relevant knowledge retrieved from long-term memory. Decisions are never precompiled into uninterruptible sequences.
3.1.3 ACT-R
3.1.3.1 Intro:
3.1.3.1.1 ACT-R is a cognitive architecture: a theory for simulating and understanding human cognition. Researchers working on ACT-R strive to understand how people organize knowledge and produce intelligent behavior. As the research continues, ACT-R evolves ever closer into a system which can perform the full range of human cognitive tasks: capturing in great detail the way we perceive, think about, and act on the world.
3.1.3.1.2 ACT-R is a hybrid cognitive architecture. Its symbolic structure is a production system; the subsymbolic structure is represented by a set of massively parallel processes that can be summarized by a number of mathematical equations. The subsymbolic equations control many of the symbolic processes. For instance, if several productions match the state of the buffers, a subsymbolic utility equation estimates the relative cost and benefit associated with each production and decides to select for execution the production with the highest utility. Similarly, whether (or how fast) a fact can be retrieved from declarative memory depends on subsymbolic retrieval equations, which take into account the context and the history of usage of that fact. Subsymbolic mechanisms are also responsible for most learning processes in ACT-R.
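The conflict-resolution idea in the paragraph above can be sketched as follows. This is an illustration with invented production names and utility values, not ACT-R's actual equations or API: when several productions match the buffers, the one with the highest (optionally noisy) utility is selected for execution.

```python
import random

# Illustrative sketch of utility-based conflict resolution: among the
# productions that match the current buffer state, pick the one whose
# utility (plus optional Gaussian noise) is highest. With noise_sd=0
# the choice is deterministic.

def select_production(matching, noise_sd=0.0, rng=random.Random(0)):
    """Return the matching production with the highest noisy utility."""
    return max(matching, key=lambda p: p["utility"] + rng.gauss(0, noise_sd))

matching = [
    {"name": "retrieve-fact", "utility": 2.0},
    {"name": "count-up",      "utility": 1.5},
]
print(select_production(matching)["name"])  # retrieve-fact
```

In ACT-R proper, the utilities themselves are learned from the history of costs and rewards; here they are fixed constants for the sake of the sketch.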
3.1.3.2 Architecture
3.1.3.2.1 Pretty picture.
3.1.3.2.1.1 Planning experiments: in parallel with psychological ones
3.1.3.2.1.2 Comparison
3.1.3.2.1.3 There are two types of modules:
3.1.3.2.1.3.1 perceptual-motor modules, which take care of the interface with the real world (i.e., with a simulation of the real world). The most well-developed perceptual-motor modules in ACT-R are the visual and the manual modules.
3.1.3.2.1.3.2 memory modules.
3.1.3.2.1.3.2.1 There are two kinds of memory modules in ACT-R:
3.1.3.2.1.3.2.1.1 declarative memory , consisting of facts such as Washington, D.C. is the capital of United States, France is a country in Europe, or 2+3=5, and
3.1.3.2.1.3.2.1.2 procedural memory, made of productions. Productions represent knowledge about how we do things: for instance, knowledge about how to type the letter “Q” on a keyboard, about how to drive, or about how to perform addition.
3.2 BDI approaches (again thanks to J. Sabater)
3.2.1 based on the Belief-Desire-Intention software model that implements the principal aspects of Michael Bratman’s theory of human practical reasoning
3.2.2 the BDI model is based on “folk psychology” and was developed only as a way of explaining future-directed intention and not as a general model for cognition.
3.3 Multi-agent cognitive BDI architectures
3.3.1 Jason (thanks to F. Grimaldo for this part)
3.3.1.1 Basic ideas
3.3.1.1.1 Concurrent, multi-agent
3.3.1.1.2 PLAN-based
3.3.1.1.3 Original take on plan recovery
3.3.1.2 Technically..
3.3.1.2.1 Internals of the agent based on Agentspeak BDI
3.3.1.2.1.1 Beliefs represent the information available to an agent (e.g., about the environment or other agents)
3.3.1.2.1.1.1 publisher(wiley)
3.3.1.2.1.1.1.1 wiley(publisher)
3.3.1.2.1.1.2 fiume_esondato_3_novembre (Italian: "river flooded on 3 November")
3.3.1.2.1.1.3 fiume(esondato, 3 nov)
3.3.1.2.1.1.3.1 annotation
3.3.1.2.1.1.3.1.1 fiume(esondato, 3 nov)[belief=0.9, source=francesca]
3.3.1.2.1.1.3.1.2 fiume(esondato, 3 nov, belief=0.9, source=francesca)
3.3.1.2.1.2 Goals represent states of affairs the agent wants to bring about (come to believe, when goals are used declaratively)
3.3.1.2.1.2.1 • Achievement goals:
3.3.1.2.1.2.1.1 !write(book)
3.3.1.2.1.2.2 • Test goals: attempts to retrieve information from the belief base
3.3.1.2.1.2.2.1 ?publisher(P)
3.3.1.2.1.3 An agent reacts to events by executing plans
3.3.1.2.1.3.1 Events happen as a consequence of changes in the agent’s beliefs or goals
3.3.1.2.1.3.1.1 AgentSpeak triggering events:
3.3.1.2.1.3.1.2 • +b (belief addition)
3.3.1.2.1.3.1.3 • -b (belief deletion)
3.3.1.2.1.3.1.4 • +!g (achievement-goal addition)
• -!g (achievement-goal deletion)
• +?g (test-goal addition)
3.3.1.2.1.3.1.5 • -?g (test-goal deletion)
3.3.1.2.1.3.2 Plans are recipes for action, representing the agent’s know-how
3.3.1.2.1.3.2.1 An AgentSpeak plan has the following general structure:
3.3.1.2.1.3.2.1.1 triggering_event : context <- body.
3.3.1.2.1.3.2.1.1.1 +!drill_67P : not battery_charge(low) & drill_ok & current_power(P)
   <- drill_at_power(P).
-!drill_67P : ~drill_ok
   <- true.
-!drill_67P
   <- ?current_power(P);
      -+current_power(P + 1);
      !drill_67P.
3.3.1.2.1.3.2.1.1.1.1 Rolling
3.3.1.2.1.3.2.1.1.1.2 Sleep well.
3.3.1.2.1.3.2.1.1.2 Exercise
3.3.1.2.1.3.2.1.1.2.1 Ingredients:
• location(A, B) means that object A is at location B
• !examine(Object) is the subgoal of examining an object
• !at(Coordinates) is the subgoal of getting the lander to location Coordinates
• assume that a new belief enters the system, namely that there is a green patch on a rock, which makes it worth examining: green_patch(r123)
3.3.1.2.1.3.2.1.1.2.1.1 +green_patch(Rock) : not battery_charge(low) <- ?location(Rock,Coordinates); !at(Coordinates); !examine(Rock).
3.3.1.2.1.4 .. and where is the intention? .. wait
3.3.1.2.1.4.1 Intentions are committed plans and exist at the level of the reasoning cycle
3.3.1.2.1.4.1.1 ten steps
3.3.1.2.1.4.1.1.1 1. Perceiving the Environment
3.3.1.2.1.4.1.1.2 2. Updating the Belief Base
3.3.1.2.1.4.1.1.3 3. Receiving Communication from Other Agents
3.3.1.2.1.4.1.1.4 4. Selecting ‘Socially Acceptable’ Messages
3.3.1.2.1.4.1.1.5 5. Selecting an Event
3.3.1.2.1.4.1.1.6 6. Retrieving all Relevant Plans
3.3.1.2.1.4.1.1.7 7. Determining the Applicable Plans
3.3.1.2.1.4.1.1.8 8. Selecting one Applicable Plan
3.3.1.2.1.4.1.1.9 9. Selecting an Intention for Further Execution
3.3.1.2.1.4.1.1.10 10. Executing one step of an Intention
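The ten steps above can be sketched as a toy loop in Python. All class and method names here are illustrative simplifications added for this outline, not Jason's actual API; message handling (steps 3-4) is folded into belief update for brevity:

```python
# A heavily simplified sketch of a Jason-style reasoning cycle.
# Plans map a triggering event to (context, body) pairs; an intention
# is a committed plan body, executed one step per cycle.

class Agent:
    def __init__(self, plans):
        self.beliefs = set()
        self.events = []        # pending triggering events, e.g. "+green_patch"
        self.intentions = []    # committed plans: lists of remaining steps
        self.plans = plans      # trigger -> [(context, body), ...]

    def cycle(self, percepts):
        for b in percepts:                          # steps 1-4: perceive and
            if b not in self.beliefs:               # update the belief base
                self.beliefs.add(b)
                self.events.append("+" + b)         # belief-addition event
        if self.events:
            event = self.events.pop(0)              # step 5: select an event
            relevant = self.plans.get(event, [])    # step 6: relevant plans
            applicable = [body for ctx, body in relevant
                          if ctx <= self.beliefs]   # step 7: context holds
            if applicable:                          # step 8: select one plan
                self.intentions.append(list(applicable[0]))
        if self.intentions:
            intention = self.intentions[0]          # step 9: select intention
            action = intention.pop(0)               # step 10: execute one step
            if not intention:
                self.intentions.pop(0)
            return action
        return None

plans = {"+green_patch": [({"battery_ok"}, ["goto(rock)", "examine(rock)"])]}
agent = Agent(plans)
print(agent.cycle({"battery_ok"}))   # None: no plan triggered yet
print(agent.cycle({"green_patch"}))  # goto(rock)
print(agent.cycle(set()))            # examine(rock)
```

Note how the intention (the committed plan) persists across cycles and is executed one step at a time, which is exactly why Jason decisions are never precompiled into uninterruptible sequences.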
3.3.1.2.1.4.2 figure..
3.3.1.2.1.4.2.1 Jason's reasoning cycle in pictures
3.3.1.2.2 World described in Java, but see..
3.3.1.3 The JaCaMo triad
3.3.1.3.1 Jason
3.3.1.3.1.1 Agents
3.3.1.3.2 CArtAgO
3.3.1.3.2.1 Artefacts
3.3.1.3.3 Moise
3.3.1.3.3.1 Organizations
4 Reputation as a cognitive artefact
4.1 The theory applies to reputation about a norm.
4.1.1 On the other hand, all reputation is about a norm in some sense. There are cases of reputation about skill, but even then one could say it is about the norm that says you have to perform well (in your profession, for example)
4.2 About that norm, there are four essential roles.
4.2.1 T
4.2.2 E
4.2.3 B
4.2.4 G
4.3 Here, we explain who they are.
4.3.1 Reputation involves four sets of agents:
4.3.1.1 • a nonempty set T of agents who are the targets of the evaluation
4.3.1.2 • a nonempty set E of agents who share the evaluation (the evaluators)
4.3.1.3 • a nonempty set B of beneficiaries, i.e., the agents sharing the goal with regard to which the elements of T are evaluated
4.3.1.4 • a nonempty set G (gossipers) of agents (also called Third-party) who share the meta-belief that members of E share the evaluation; this is the set of all agents aware of the effect of reputation (as stated above, effect is only one component of it; awareness of the process is not implied).
4.4 Now that the sets are identified, we ask what the overlaps between them are. Is everyone at the same time a target and an evaluator? A target and a gossiper? Or are the roles clearly distinguished?
4.4.1 Here is the tree.
4.4.1.1 Don't expand this node!
4.4.1.1.1 TEBG
4.4.1.1.1.1 T / EBG
4.4.1.1.1.1.1 TE / BG
4.4.1.1.1.1.2 TB / EG
4.4.1.1.1.1.3 TG / EB
4.4.1.1.1.2 E / TBG
4.4.1.1.1.3 B / TEG
4.4.1.1.1.4 G / TEB
4.4.1.1.2 T / E / B / G
4.4.2 In fact (since the order of the groups does not count) there are only 9 cases: the full overlap, the seven splits into two groups, and the fully separated case.
4.4.3 Now we have a classification. We can even build a tree.
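The count of cases in the tree can be checked with a short enumeration (an illustrative sketch added here, not part of the original map): one case with all four roles together, seven unordered splits of {T, E, B, G} into two non-empty groups, and one case with every role separate.

```python
from itertools import combinations

# Checking the tree above: 1 case with all roles together, plus all
# unordered splits of (T, E, B, G) into two non-empty groups, plus
# 1 case with each role in its own group.

roles = ("T", "E", "B", "G")

def two_block_splits(items):
    """All unordered splits of items into two non-empty groups."""
    splits = set()
    for r in range(1, len(items)):
        for part in combinations(items, r):
            rest = tuple(x for x in items if x not in part)
            splits.add(frozenset([part, rest]))   # unordered: {part, rest}
    return splits

print(len(two_block_splits(roles)))          # 7
print(1 + len(two_block_splits(roles)) + 1)  # 9
```

The seven two-group splits are exactly the four 1-versus-3 cases (T/EBG, E/TBG, B/TEG, G/TEB) and the three 2-versus-2 cases (TE/BG, TB/EG, TG/EB) shown in the tree.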
4.5 What are the effects of the different situations?
4.5.1 We need some hypotheses about the forces in play: what it means to share the same group, and what it means to be in separate groups.
4.5.1.1 To build a better theory, of course, we would need to specify also what it means to be in the same group.
4.5.1.1.1 Groups are normally considered to be united, solidary and self-helping
4.5.1.1.2 However, exceptions can happen - sometimes by design; consider a parliament, whose members are supposed to hold differing views on nearly all political matters
4.5.1.1.2.1 With the exception of matters "of national interest" (in the classic nation-state credo), on the one hand, and of members' wages, on the other hand.
4.5.1.1.3 Thus, a better theory would consider what the goals of groups are, and differentiate effects on the basis of these goals. This one does not.
4.5.1.2 To keep things simple, we consider a group as one with common interests and goals, animated by solidarity, cohesion and harmony of members towards each other.
4.5.2 We examine the results in the terms of
4.5.2.1 the positive/negative bias that they produce
4.5.2.2 the amount of response they elicit (tendency to provide an answer even if uncertain vs. to remain silent)
4.5.2.3 Combining the values of such intersections gives rise to countless situations. By reducing each intersection to a binary dimension, where values are either high or low, we can describe a finite (but still too large) variety of examples. We call attention to two rather extreme situations: gossip among students, and eBay.
4.5.2.5 the higher the intersection between G and E, the higher G's commitment, and therefore their responsibility and propensity to provide feedback. On the contrary, the overlap between G and B (and, equivalently, between E and B) gives rise to a beneficiary-oriented benevolence, with a consequent negative bias. Instead, a higher intersection between G and T (or between E and T) leads to the leniency bias. Finally, the intersection between T and B concerns the perception of the effects of gossip on targets: the higher this perception, the stronger the expected responsibility of gossipers.
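The predicted effect of each overlap can be collected in a small lookup table. This is only a schematic restatement of the claims above; the function and variable names are invented for illustration:

```python
# Schematic lookup of the overlap-effect predictions: each pair of role
# sets maps to the bias or effect expected when their intersection is
# large. The table mirrors the text; it is not a behavioral model.

overlap_effects = {
    frozenset("GE"): "commitment: responsibility and propensity to provide",
    frozenset("GB"): "beneficiary-oriented benevolence (negative bias)",
    frozenset("EB"): "beneficiary-oriented benevolence (negative bias)",
    frozenset("GT"): "leniency bias",
    frozenset("ET"): "leniency bias",
    frozenset("TB"): "perceived effect of gossip on targets",
}

def expected_effect(role_a, role_b):
    """Effect the theory predicts when the two role sets overlap strongly."""
    return overlap_effects.get(frozenset(role_a + role_b),
                               "no specific prediction")

print(expected_effect("G", "T"))  # leniency bias
```

Frozensets make the lookup order-independent, matching the fact that "G overlaps T" and "T overlaps G" are the same claim.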
4.5.2.8 Evidence produced in the relevant literature, cited in the second section of this paper, matches these expectations. As a matter of fact, a system characterised by these intersections among agent roles does not qualify as reputational, according to our analysis, but rather as a system for image formation, augmented by centralised collection and distribution. These rather extreme examples show the advantages of the model presented so far: it allows for concrete predictions to be made and tested against available evidence, concerning both real life examples and technological applications. Predictive models are much needed especially in the latter domain, where theory-driven expectations are merely economic (game-theoretic). They concern the positive effects of sellers' profiles on economic efficiency, rather than the functioning of reputation itself. Our claim is that feedback profile is less than reputation. Of course, even though a system like eBay is not a truly reputational system, it seems to be good enough as to meet market criteria (i.e., volume of transactions and level of prices). However, how healthy and stable is a market (whether electronic or traditional) characterised by feedback under-provision and overrating? More generally, what are the specific effects of overrating and underrating? To answer these questions, we will turn to artificial, simulation-based, data.
5 References
5.1 BDI and agentspeak
5.1.1 Rao, A. S. (1996). AgentSpeak(L): BDI agents speak out in a logical computable language. In Van de Velde, W. and Perram, J. W., editors, Agents Breaking Away, volume 1038 of Lecture Notes in Computer Science, pages 42-55. Springer Berlin Heidelberg.
5.1.2 Bordini, R. H., Hübner, J. F., and Wooldridge, M. (2007). Programming multi-agent systems in AgentSpeak using Jason. John Wiley & Sons.
5.2 Reputation
5.2.1 Repage
5.2.2 Group size and punishment
5.2.2.1 GiardiniMABS2014_Post-Proc02.pdf
