krainaksiazek: regularization and learning theory 20124155
Found 2 products in 1 shop
Machine Learning (Books on Demand)
Books / Foreign-language literature
Source: Wikipedia. Pages: 254. Chapters: Artificial neural network, Supervised learning, Hidden Markov model, Pattern recognition, Reinforcement learning, Principal component analysis, Self-organizing map, Overfitting, Cluster analysis, Granular computing, Rough set, Mixture model, Expectation-maximization algorithm, Radial basis function network, Types of artificial neural networks, Learning to rank, Forward-backward algorithm, Perceptron, Category utility, Neural modeling fields, Dominance-based rough set approach, Principle of maximum entropy, Non-negative matrix factorization, Concept learning, K-means clustering, Structure mapping engine, Viterbi algorithm, Cross-validation, Hierarchical temporal memory, Activity recognition, Algorithmic inference, Formal concept analysis, Gradient boosting, Information bottleneck method, Nearest neighbor search, Simultaneous localization and mapping, Markov decision process, Gittins index, K-nearest neighbor algorithm, General Architecture for Text Engineering, Reasoning system, Concept drift, Uniform convergence, Conceptual clustering, Multi-armed bandit, Multilinear subspace learning, Conditional random field, DBSCAN, Feature selection, Learning with errors, Weka, Evolutionary algorithm, Iris flower data set, Binary classification, OPTICS algorithm, Partially observable Markov decision process, Constrained Conditional Models, Group method of data handling, Learning classifier system, Random forest, Statistical classification, Analogical modeling, Bregman divergence, Backpropagation, Temporal difference learning, Loss function, Curse of dimensionality, Alternating decision tree, Evolutionary multi-modal optimization, Stochastic gradient descent, Kernel principal component analysis, Explanation-based learning, K-medoids, RapidMiner, Transduction, Variable-order Markov model, Kernel adaptive filter, Classification in machine learning, Grammar induction, Sense Networks, Glivenko Cantelli theorem, Cross-entropy method, Dimension 
reduction, Rand index, Spiking neural network, Feature Selection Toolbox, Co-training, Multinomial logit, Computational learning theory, Local Outlier Factor, Q-learning, Gaussian process, Evolvability, Universal Robotics, Crossover, Shattering, Cluster-weighted modeling, Version space, Variable kernel density estimation, Calibration, Randomized weighted majority algorithm, Leabra, Growing self-organizing map, TD-Gammon, Prior knowledge for pattern recognition, Generative topographic map, VC dimension, ID3 algorithm, String kernel, Prefrontal Cortex Basal Ganglia Working Memory, Meta learning, Inductive transfer, Margin classifier, Active learning, Feature extraction, Regularization, IDistance, Dynamic time warping, Latent variable, Layered hidden Markov model, Empirical risk minimization, Jabberwacky, Inductive bias, Shogun, Confusion matrix, Never-Ending Language Learning, Accuracy paradox, FLAME clustering, Smart variables, Probably approximately correct learning, Hierarchical hidden Markov model, Document classification, Adjusted mutual information, Generalization error, Knowledge discovery, Quadratic classifier, Ugly duckling theorem, Bongard problem, Online machine learning, Algorithmic learning theory, Information Fuzzy Networks, Knowledge integration, Bootstrap aggregating, Early stopping, Kernel methods, Bag of words model, CIML community portal, Sequence labeling, Semi-supervised learning, Minimum redundancy feature selection, Matthews correlation coefficient, Learn...
Learning with Submodular Functions (now publishers Inc)
Books / Foreign-language literature
Submodular functions are relevant to machine learning for at least two reasons: (1) some problems can be expressed directly as the optimization of submodular functions, and (2) the Lovász extension of submodular functions provides a useful set of regularization functions for supervised and unsupervised learning. In Learning with Submodular Functions: A Convex Optimization Perspective, the theory of submodular functions is presented in a self-contained way from a convex-analysis perspective, drawing tight links between certain polyhedra, combinatorial optimization, and convex optimization problems. In particular, the book describes how submodular function minimization is equivalent to solving a wide variety of convex optimization problems, which allows the derivation of new efficient algorithms for approximate and exact submodular function minimization with theoretical guarantees and good practical performance. Through many examples of submodular functions, it reviews applications to machine learning such as clustering, experimental design, sensor placement, graphical-model structure learning, and subset selection, as well as a family of structured sparsity-inducing norms that can be derived from submodular functions. Learning with Submodular Functions: A Convex Optimization Perspective is an ideal reference for researchers, scientists, and engineers interested in applying submodular functions to machine learning problems.
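To make the two ideas in the blurb concrete: a set function F is submodular when F(A) + F(B) ≥ F(A ∪ B) + F(A ∩ B) for all sets A, B, with graph cuts being a standard example, and the Lovász extension turns such a set function into a convex function on R^n that agrees with F on indicator vectors. A minimal sketch (the cut function and the sorted-marginal-gains formula are textbook constructions, not code from the book; function and variable names are illustrative):

```python
def cut_value(S, edges):
    """Graph cut function: number of edges with exactly one endpoint
    in S. Graph cuts are a classic example of a submodular function."""
    S = set(S)
    return sum(1 for u, v in edges if (u in S) != (v in S))

def lovasz_extension(w, f, n):
    """Lovász extension of a set function f on {0, ..., n-1} with
    f(empty set) = 0. Sort coordinates of w in decreasing order and
    accumulate the marginal gains f(S_k) - f(S_{k-1}) along the
    resulting chain of sets."""
    order = sorted(range(n), key=lambda i: -w[i])
    total, prev, chain = 0.0, 0.0, []
    for i in order:
        chain.append(i)
        fk = f(chain)
        total += w[i] * (fk - prev)
        prev = fk
    return total

# Path graph on 4 vertices: 0 - 1 - 2 - 3.
edges = [(0, 1), (1, 2), (2, 3)]
f = lambda S: cut_value(S, edges)

# On an indicator vector, the Lovász extension recovers f itself.
print(lovasz_extension([1, 1, 0, 0], f, 4))  # equals f({0, 1}) = 1
```

Because the Lovász extension of a submodular function is convex, minimizing F over sets can be relaxed to minimizing a convex function over the hypercube, which is the bridge to the convex-optimization algorithms the monograph develops.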