Agent types

What is an agent? A basic definition would say that an agent is an autonomous software program that can make decisions based on the inputs it receives.

This means that an agent has some degree of "intelligence". How intelligent it really is, however, is disputed. In nature, systems also act in response to given stimuli. According to Newton's third law of motion, if one applies a force to a system, the system responds with an equal and opposite force. This is a specific instance of a more general idea: any system disturbed from its equilibrium responds so as to restore that equilibrium.

In the same spirit, a reactive agent is an agent that waits for changes in its environment. The agent receives the input and then, using a rule-based system or any other implementation, picks the action it must take.

A proactive agent, by contrast, takes action independently of environment changes, i.e. it takes the initiative in the system.
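To make the distinction concrete, here is a minimal Python sketch; the class names, rules, and actions are invented for illustration and are not taken from any particular agent framework.

import random

class ReactiveAgent:
    # Waits for a stimulus and maps it to an action through fixed rules.
    RULES = {"obstacle": "turn", "goal_visible": "approach"}

    def perceive(self, stimulus):
        return self.RULES.get(stimulus, "wait")

class ProactiveAgent:
    # Acts on its own initiative, without waiting for an external stimulus.
    def step(self):
        return random.choice(["explore", "recharge", "report_status"])

print(ReactiveAgent().perceive("obstacle"))   # turn
print(ProactiveAgent().step())                # e.g. explore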

Possible worlds of belief

What are possible worlds of belief?

Let's say that we have a fact X. We may know that X is true now. But we might also know that X will be true at some time t in the future. Or X may possibly be true at some time in the future. Or X may merely be believed to be true.

All these statements can be modeled in logic using the possible-worlds approach. In one world John lives in America; in another world John lives in Africa. Facts can have different truth values in different worlds, but the worlds may be connected to one another.

If fact X is true in all worlds, then we can conclude that X is true.

Such a statement is called a tautology.
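As a toy illustration (the worlds and facts below are made up), checking whether a fact can be concluded amounts to checking that it holds in every world:

worlds = [
    {"X": True, "john_lives_in_america": True},
    {"X": True, "john_lives_in_america": False},
]

def true_in_all_worlds(fact):
    # A fact can be concluded only if it holds in every possible world.
    return all(world[fact] for world in worlds)

print(true_in_all_worlds("X"))                       # True: X can be concluded
print(true_in_all_worlds("john_lives_in_america"))   # False: its truth varies by world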

In Kripke semantics for possible-worlds belief, every world has a set of accessible worlds. For example, if John lives in America in world 1, and he believes it is possible that he lives in Africa, then world 2 (in which he lives in Africa) is accessible from his world, world 1. We can write this as world 1 -> world 2.

Going further, we can discuss links between several worlds, and the problem of reaching a world that is not directly accessible from the one we start in. In such cases there are chains of beliefs that can be modeled.
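Here is a minimal sketch of this idea, with invented world names and facts: an agent in a world believes a fact if the fact is true in every world accessible from that world.

worlds = {
    "w1": {"john_lives_in_america": True,  "john_lives_in_africa": False},
    "w2": {"john_lives_in_america": False, "john_lives_in_africa": True},
}
# From w1 the agent also considers w2 possible; from w2 only w2 itself.
accessible = {"w1": {"w1", "w2"}, "w2": {"w2"}}

def believes(world, fact):
    # An agent in `world` believes `fact` if it is true in every accessible world.
    return all(worlds[w][fact] for w in accessible[world])

print(believes("w1", "john_lives_in_america"))   # False: w2, where it fails, is accessible
print(believes("w2", "john_lives_in_africa"))    # True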

Properties of the accessibility relation:

1. Reflexivity: w -> w. This means any world is accessible from itself.

2. Transitivity: If we have w1 -> w2 and w2 -> w3, then we also have w1 -> w3.

One interesting consequence of these properties, if the model accepts them, is introspection: if John believes X, then John believes that "John believes X" (positive introspection, which corresponds to transitivity). This is part of doxastic logic.
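As a rough sanity check of positive introspection, the following sketch (the worlds, relation, and facts are all assumptions made up for the example) uses a transitive accessibility relation and verifies that believing X at world w1 implies believing the proposition "believes X" at w1.

worlds = {"w1", "w2", "w3"}
access = {("w1", "w2"), ("w2", "w3"), ("w1", "w3")}   # transitive by construction
holds_X = {"w2", "w3"}                                # worlds where fact X is true

def believes(w, holds):
    # Believed at w iff the fact holds in every world accessible from w.
    return all(v in holds for (u, v) in access if u == w)

# Evaluate the proposition "believes X" world by world.
holds_believes_X = {w for w in worlds if believes(w, holds_X)}

print(believes("w1", holds_X))             # True: X holds in w2 and w3
print(believes("w1", holds_believes_X))    # True: John also believes that he believes X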

K-nearest neighbor classification algorithm

K-nearest neighbor is a classic, simple, and well-known classification algorithm. It is a supervised learning algorithm.

Let's suppose we have a set of Q points placed in an n-dimensional space. In this space, the distance between the points (elements) we need to classify can be measured using different distance metrics (Euclidean distance, for example).

If we have a new element E that we need to classify, the algorithm uses the following idea: find the K training points nearest to E and assign E to the class held by the majority of those neighbors.
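A minimal sketch of the idea, assuming Euclidean distance and a small made-up training set (the function and variable names are mine, not from any particular library):

from collections import Counter
import math

def knn_classify(training, new_point, k=3):
    # training: list of (point, label) pairs; new_point: tuple of coordinates.
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Sort the training points by distance to the new point and keep the k nearest.
    neighbors = sorted(training, key=lambda item: dist(item[0], new_point))[:k]
    # Majority vote among the labels of the k nearest neighbors.
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

training = [((0, 0), "A"), ((1, 0), "A"), ((0, 1), "A"), ((5, 5), "B"), ((6, 5), "B")]
print(knn_classify(training, (0.5, 0.5), k=3))   # prints "A"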

In case of a tie, we can either drop the furthest of the K neighbors so that the tie is broken, or use an evaluation function to decide which class the new point belongs to (a preferred class, for example), or apply some other implementation-dependent rule.
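One possible sketch of the first option: drop the furthest neighbor and re-vote until one class has a strict majority. The helper name and example labels are hypothetical, and the labels are assumed to be ordered from nearest to furthest.

from collections import Counter

def majority_with_tiebreak(neighbor_labels):
    # neighbor_labels: class labels of the K neighbors, ordered nearest to furthest.
    labels = list(neighbor_labels)
    while labels:
        votes = Counter(labels).most_common()
        # Accept the vote once one class has a strict majority among the remaining labels.
        if len(votes) == 1 or votes[0][1] > votes[1][1]:
            return votes[0][0]
        labels.pop()   # drop the furthest remaining neighbor and vote again
    return None

print(majority_with_tiebreak(["A", "B", "B", "A"]))   # prints "B" once the furthest "A" is dropped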

The algorithm is very simple and gives good results in practice. However, it is sensitive to outliers, and points that lie far away from the training set are not classified very well.

Of course, K can vary according to data size, available computation power, the time allowed for classification, or other criteria.

Truth Maintenance System

In logic, truth can "change" over time. Here is what I mean by this:

Obama is the president of the United States.

Over time the truth value of this sentence might change; in 2056 it will most likely be false. During the lifetime of our logical reasoning, facts can change from true to false and back. The purpose of a Truth Maintenance System is to keep track of the truth values of the sentences, taking into account that facts are derived from already existing facts that might later change their truth value.

For example we know that

P

and

P->Q

In this case we can add to the knowledge base the fact

Q

because P is true and we know that P->Q.

However, if later on we need to retract P and assert ~P (i.e. P is now false), the truth value of Q is no longer justified, because it was based on the assertion of P. In this case we either have to roll back all statements that were asserted after P, or recheck them. A truth maintenance system takes care of all this.

A simple method of keeping track of which facts hold is to maintain IN and OUT lists. Each fact has a list of facts that have to be IN for the fact itself to be IN; this is its IN list. Each fact also has a list of facts that have to be OUT for it to be IN; this is its OUT list.

This method creates a graph of facts, each depending on others, and it is easy to propagate modifications from the leaves (which are the basic assumptions) through the rest of the graph to see which states changed.

For example, for our P -> Q statement, the node is Q, its IN list is {P}, and its OUT list is empty. If P becomes OUT, then Q becomes OUT as well. If instead we had a statement like P ^ ~R -> Q, the IN list would be {P} and the OUT list would be {R}. Any change in these facts would change Q as well.
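Below is a simplified sketch of this bookkeeping, loosely modeled on a justification-based TMS; the class and method names are invented, and real systems track justifications and dependency propagation in much more detail.

class TMS:
    def __init__(self):
        self.status = {}          # fact -> True (IN) / False (OUT)
        self.justifications = {}  # fact -> (IN list, OUT list)

    def assert_fact(self, fact, value=True):
        self.status[fact] = value
        self.propagate()

    def add_rule(self, fact, in_list, out_list=()):
        self.justifications[fact] = (list(in_list), list(out_list))
        self.propagate()

    def propagate(self):
        # Re-evaluate derived facts until nothing changes.
        changed = True
        while changed:
            changed = False
            for fact, (in_list, out_list) in self.justifications.items():
                new = (all(self.status.get(f, False) for f in in_list)
                       and all(not self.status.get(f, False) for f in out_list))
                if self.status.get(fact) != new:
                    self.status[fact] = new
                    changed = True

tms = TMS()
tms.assert_fact("P")
tms.add_rule("Q", in_list=["P"])                      # P -> Q
print(tms.status["Q"])                                # True: Q is IN
tms.assert_fact("P", value=False)                     # retract P
print(tms.status["Q"])                                # False: Q becomes OUT

tms.add_rule("Q2", in_list=["P2"], out_list=["R"])    # P2 ^ ~R -> Q2
tms.assert_fact("P2")
print(tms.status["Q2"])                               # True while R is OUT
tms.assert_fact("R")
print(tms.status["Q2"])                               # False: R became IN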
