Hi,
I need to apply the same transformation to the test set that was learned from
the Isomap training set. How can I do this? I'm not asking specifically about
Isomap; the same question applies to the other dimensionality reduction
transforms.
Thanks.
If I understand correctly, you want to do supervised learning. Since your data
has high-dimensional features, you want to pre-process them with Isomap (or
some other algorithm) to improve runtime performance with a small cost in
predictive accuracy. Is that right?
There is a subtle difference between dimensionality reduction algorithms and
manifold learners. Manifold learners (such as PCA, Kernel-PCA, Nonlinear-PCA,
and Autoencoders) train a model that maps from low-to-high (or high-to-low)
dimensional space (or both). Many dimensionality reduction techniques (such as
Isomap, LLE, MLLE, Manifold Sculpting, HLLE, LTSA, and MVU) reduce
dimensionality without training a map or model of the manifold. Technically,
these techniques are not "manifold learners" (although the published
literature is not always careful to make this distinction). If you want to
generalize (that is, apply the same transformation to a test set), then a
mapping is required.
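To make the distinction concrete, here is a minimal sketch in Python using
scikit-learn (my choice of library, since it provides most of the techniques
named above; the data arrays are placeholders). A manifold learner such as PCA
fits an explicit map, so the same projection can later be applied to unseen
data:

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.RandomState(0)   # placeholder data
    X_train = rng.rand(200, 100)     # 200 samples, 100 features
    X_test = rng.rand(50, 100)

    pca = PCA(n_components=5)
    pca.fit(X_train)                 # learns an explicit linear map
    Z_test = pca.transform(X_test)   # the same map generalizes to new data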
If you wish to use a dimensionality reduction algorithm that is not a manifold
learner (such as Isomap), you can train a separate model to do the mapping. In
other words, after you reduce the dimensionality of the training features, you
can train a regression model to map from the high-dimensional features to the
low-dimensional features. You can then use your trained regression model to
apply a similar transformation to the test set.
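As a rough sketch of that recipe, again assuming scikit-learn (the Isomap and
KNeighborsRegressor estimators below are one possible choice of tools, not the
only option):

    import numpy as np
    from sklearn.manifold import Isomap
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.RandomState(0)   # placeholder data
    X_train = rng.rand(200, 100)
    X_test = rng.rand(50, 100)

    # Reduce the dimensionality of the training features
    iso = Isomap(n_components=5)
    Z_train = iso.fit_transform(X_train)

    # Train a (multi-output) regression model that maps the
    # high-dimensional features to the low-dimensional embedding
    mapper = KNeighborsRegressor(n_neighbors=5)
    mapper.fit(X_train, Z_train)

    # Apply an approximation of the same transformation to the test set
    Z_test = mapper.predict(X_test)

A nearest-neighbor regressor is a natural fit here because it respects the
local geometry that Isomap tries to preserve, but any regressor that handles
multiple outputs would serve the same purpose.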