Results found: 3
1
Learning Behaviors of Functions with Teams
We consider the inductive inference model of Gold [15]. Suppose we are given a set of functions that are learnable with a certain number of mind changes and errors. What can we consistently predict about those functions if we are allowed fewer mind changes or errors? In [20] we relaxed the notion of exact learning by considering some higher-level properties of the input-output behavior of a given function. In this context, a learner produces a program that describes a property of the given function. Can we predict generic properties such as threshold or modality if we allow fewer mind changes or errors? These questions were completely answered in [20] when the learner is restricted to a single IIM. In this paper we allow a team of IIMs to collaborate in the learning process. The learning is considered successful if any one of the team members succeeds. A motivation for this extension is to understand and characterize properties that are learnable for a given set of functions in a team environment.
2
Learning Behaviors of Functions
We consider the inductive inference model of Gold [15]. Suppose we are given a set of functions that are learnable with a certain number of mind changes and errors. What properties of these functions are learnable if we allow fewer mind changes or errors? To answer this question, this paper extends the Inductive Inference model introduced by Gold [15]. Another motivation for this extension is to understand and characterize properties that are learnable for a given set of functions. Our extension considers a wide range of properties of functions based on their input-output relationship. Two specific properties of functions are studied in this paper. The first property, which we call modality, explores how the output of a function fluctuates. For example, consider a function that predicts the price of a stock. A brokerage company buys and sells stocks very often in a day for its clients with the intent of maximizing their profit. If the company is able to predict the trend of the stock market "reasonably" accurately, then it is bound to be very successful. The identification criterion for this property of a function f is called PREX, which predicts whether f(x) is equal to, less than, or greater than f(x + 1) for each x. Next, as opposed to the constant tracking done by a brokerage company, an individual investor does not often track dynamic changes in stock values. Instead, the investor would like to move the investment to a less risky option when the investment exceeds or falls below a certain threshold. We capture this notion using an identification criterion called TREX that essentially predicts whether a function value is at, above, or below a threshold value. Conceptually, modality prediction (i.e., PREX) and threshold prediction (i.e., TREX) are "easier" than EX learning. We show that neither the number of errors nor the number of mind changes can be reduced when we ease the learning criterion from exact learning to learning modality or threshold.
We also prove that PREX and TREX are totally different properties to predict. That is, the strategy for a brokerage company may not be a good strategy for an individual investor, and vice versa.
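The two prediction targets described in this abstract can be illustrated concretely. The sketch below is not from the paper; the function names `prex`, `trex`, and the toy `price` function are illustrative assumptions, showing only what a correct conjecture for each criterion must output at a single point.

```python
# Minimal sketch of the two prediction targets described above.
# PREX: predict how f(x) compares with f(x + 1) (modality).
# TREX: predict where f(x) lies relative to a fixed threshold.
# Names and the toy function are illustrative, not from the paper.

def prex(f, x):
    """Modality prediction: compare f(x) with f(x + 1)."""
    if f(x) < f(x + 1):
        return "<"
    if f(x) > f(x + 1):
        return ">"
    return "="

def trex(f, x, threshold):
    """Threshold prediction: locate f(x) relative to a threshold."""
    if f(x) < threshold:
        return "below"
    if f(x) > threshold:
        return "above"
    return "at"

# A toy "stock price" function on the naturals.
def price(x):
    return (x * x) % 7

print(prex(price, 2))     # price(2)=4 vs price(3)=2, prints ">"
print(trex(price, 2, 3))  # price(2)=4 vs threshold 3, prints "above"
```

A PREX learner must eventually output a program computing this comparison for all x, while a TREX learner must do the same for the threshold relation; the paper's point is that neither task lets the learner get by with fewer mind changes or errors than exact (EX) learning.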
3
Capabilities of Thoughtful Machines
When learning a concept, the learner produces conjectures about the concept being learned. Typically the learner contemplates, performs some experiments, makes observations, does some computation, thinks carefully (that is, does not output a new conjecture for a while), and then makes a conjecture about the (unknown) concept. Any new conjecture of an intelligent learner should be valid for at least some ``reasonable amount of time'' before some evidence is found that the conjecture is false. Then the learner can further study and explore the concept and produce a new conjecture that again will be valid for some ``reasonable amount of time''. In this paper we formalize the idea of a reasonable amount of time. Learners who obey the above constraint are called ``Thoughtful learners'' (TEX learners). We show that there are classes that can be learned using the above model. We also compare this learning paradigm to other existing ones. The surprising result is that there are no capability intervals in team TEX-type learning. On the other hand, capability intervals exist in all other models. Also, these learners are orthogonal to the learners that have been studied in the literature.