Various attempts have been made to transform pseudo-Euclidean spaces into Euclidean ones. In some cases essential information may be lost. This will be illustrated by some examples. It is assumed that readers are familiar with PRTools and will consult the following pages where needed:
The first way to arrive at a Euclidean representation from data in a PE space is to use part or all of the PE vector space and interpret it as Euclidean (in effect, changing the norm):
D = chickenpieces(35,90)*makesym;
W = D*pe_em;                 % pseudo-Euclidean embedding
X_pe = D*W;                  % data in the PE space
X_ass = X_pe*euspace('ass'); % associated Euclidean space
X_pos = X_pe*euspace('pos'); % positive subspace only
X_neg = X_pe*euspace('neg'); % negative subspace only
fprintf('LOO 1NN error, original dismat: %6.3f\n',testkd(D,1,'loo'))
fprintf('LOO 1NN error, associated space: %6.3f\n',testk(X_ass,1))
fprintf('LOO 1NN error, positive space: %6.3f\n',testk(X_pos,1))
fprintf('LOO 1NN error, negative space: %6.3f\n',testk(X_neg,1))
From these numbers it can be seen that, at least for the 1NN classifier, the negative space is informative in this problem. Instead of taking just a part of the embedded space and thereby changing its norm, one can also modify the given dissimilarity matrix so that it becomes better embeddable in a Euclidean space. Two well-known transformations are adding a constant to the off-diagonal elements and raising the elements to a sufficiently small power. For instance
D*pe_em*nef   % the original negative eigenfraction
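The two corrections themselves can be sketched as follows. This is a minimal sketch in which the variable names Dc and Dp and the values of c and p are illustrative choices, not part of DisTools:

```matlab
n  = size(D,1);
c  = 10;                     % illustrative size of the additive correction
p  = 0.5;                    % illustrative power, 0 < p <= 1
Dc = D + c*(ones(n)-eye(n)); % add c to the off-diagonal elements only
Dp = D.^p;                   % element-wise power transformation
Dc*pe_em*nef                 % negative eigenfraction after the additive correction
Dp*pe_em*nef                 % negative eigenfraction after the power transformation
```

For a sufficiently large c, or a sufficiently small p, the corrected matrix becomes Euclidean and the negative eigenfraction drops to zero.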
Make plots of the nef as a function of the size of the correction. Make corresponding plots of the LOO 1NN error. Use nne on the transformed D.
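The requested experiment can be sketched as below, here for the additive correction only. The range of constants in cs is an illustrative choice, and it is assumed (as in the exercise text) that nne returns the LOO 1NN error of a dissimilarity matrix:

```matlab
D  = chickenpieces(35,90)*makesym;
n  = size(D,1);
cs = linspace(0,200,21);          % illustrative range of added constants
for i = 1:numel(cs)
  Dc = D + cs(i)*(ones(n)-eye(n));
  e_nef(i) = Dc*pe_em*nef;        % negative eigenfraction after correction
  e_nn(i)  = nne(Dc);             % LOO 1NN error on the corrected matrix
end
figure; plot(cs,e_nef); xlabel('added constant c'); ylabel('nef');
figure; plot(cs,e_nn);  xlabel('added constant c'); ylabel('LOO 1NN error');
```

A corresponding loop over powers p in (0,1], using D.^p instead of the additive correction, yields the plots for the second transformation.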
elements: datasets, datafiles, cells and doubles, mappings, classifiers, mapping types.
operations: datasets, datafiles, cells and doubles, mappings, classifiers, stacked, parallel, sequential, dyadic.
user commands: datasets, representation, classifiers, evaluation, clustering, examples, support routines.
introductory examples: Introduction, Scatterplots, Datasets, Datafiles, Mappings, Classifiers, Evaluation, Learning curves, Feature curves, Dimension reduction, Combining classifiers, Dissimilarities.