This HTML5 document contains 39 embedded RDF statements represented using HTML+Microdata notation.

The embedded RDF content will be recognized by any processor of HTML5 Microdata.

Namespace Prefixes

Prefix        IRI
dct           http://purl.org/dc/terms/
dbo           http://dbpedia.org/ontology/
foaf          http://xmlns.com/foaf/0.1/
n10           https://global.dbpedia.org/id/
rdfs          http://www.w3.org/2000/01/rdf-schema#
freebase      http://rdf.freebase.com/ns/
rdf           http://www.w3.org/1999/02/22-rdf-syntax-ns#
owl           http://www.w3.org/2002/07/owl#
wikipedia-en  http://en.wikipedia.org/wiki/
dbc           http://dbpedia.org/resource/Category:
prov          http://www.w3.org/ns/prov#
xsdh          http://www.w3.org/2001/XMLSchema#
gold          http://purl.org/linguistics/gold/
wikidata      http://www.wikidata.org/entity/
dbr           http://dbpedia.org/resource/
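The prefixed names used in the statements below (e.g. dbr:Distribution_learning_theory) expand to full IRIs by substituting the prefix's IRI from the table above. A minimal sketch of that expansion, using only the prefixes listed here (the expand helper is illustrative, not part of any DBpedia tooling):

```python
# Prefix table from this document, used to expand compact IRIs
# (CURIEs) such as dbr:Distribution_learning_theory to full IRIs.
PREFIXES = {
    "dct": "http://purl.org/dc/terms/",
    "dbo": "http://dbpedia.org/ontology/",
    "foaf": "http://xmlns.com/foaf/0.1/",
    "n10": "https://global.dbpedia.org/id/",
    "rdfs": "http://www.w3.org/2000/01/rdf-schema#",
    "freebase": "http://rdf.freebase.com/ns/",
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "owl": "http://www.w3.org/2002/07/owl#",
    "wikipedia-en": "http://en.wikipedia.org/wiki/",
    "dbc": "http://dbpedia.org/resource/Category:",
    "prov": "http://www.w3.org/ns/prov#",
    "xsdh": "http://www.w3.org/2001/XMLSchema#",
    "gold": "http://purl.org/linguistics/gold/",
    "wikidata": "http://www.wikidata.org/entity/",
    "dbr": "http://dbpedia.org/resource/",
}

def expand(curie):
    # Split on the first colon and substitute the prefix's IRI.
    prefix, _, local = curie.partition(":")
    return PREFIXES[prefix] + local

print(expand("dbr:Distribution_learning_theory"))
# → http://dbpedia.org/resource/Distribution_learning_theory
```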

Statements

Subject Item
dbr:Distribution_learning_theory
rdf:type
dbo:Software
rdfs:label
Distribution learning theory
rdfs:comment
Distribution learning theory, or the learning of probability distributions, is a framework in computational learning theory. It was proposed by Michael Kearns, , Dana Ron, Ronitt Rubinfeld, Robert Schapire and in 1994, and was inspired by the PAC framework introduced by Leslie Valiant. This article explains the basic definitions, tools and results of this framework from the point of view of the theory of computation.
dct:subject
dbc:Computational_learning_theory
dbo:wikiPageID
44655565
dbo:wikiPageRevisionID
1083045386
dbo:wikiPageWikiLink
dbr:Kolmogorov–Smirnov_test dbc:Computational_learning_theory dbr:Robert_Schapire dbr:Conditional_probability_distribution dbr:Probability_distribution dbr:Total_variation dbr:S._Dasgupta dbr:Computational_learning_theory dbr:Michael_Kearns_(computer_scientist) dbr:Statistical_learning_theory dbr:Gautam_Kamath dbr:Approximation_algorithms dbr:Linda_Sellie dbr:Total_variation_distance_of_probability_measures dbr:Statistics dbr:Cluster_analysis dbr:PAC-learning dbr:Applied_probability dbr:Leslie_Valiant dbr:Constantinos_Daskalakis dbr:Ronitt_Rubinfeld dbr:Machine_learning dbr:Kullback-Leibler_divergence dbr:Yishay_Mansour dbr:Dana_Ron
owl:sameAs
n10:2MPa9 freebase:m.012gc144 wikidata:Q25048711
dbo:abstract
Distribution learning theory, or the learning of probability distributions, is a framework in computational learning theory. It was proposed by Michael Kearns, , Dana Ron, Ronitt Rubinfeld, Robert Schapire and in 1994, and was inspired by the PAC framework introduced by Leslie Valiant. In this framework the input is a number of samples drawn from a distribution that belongs to a specific class of distributions. The goal is to find an efficient algorithm that, based on these samples, determines with high probability the distribution from which the samples were drawn. Because of its generality, this framework has been used in a wide variety of fields, such as machine learning, approximation algorithms, applied probability and statistics. This article explains the basic definitions, tools and results of this framework from the point of view of the theory of computation.
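The setting described in the abstract — samples from an unknown member of a known class, and an algorithm that identifies that member with high probability — can be illustrated with a toy sketch. This is not an algorithm from the Kearns et al. paper; it is a hedged illustration assuming the simplest possible class (a finite set of Bernoulli parameters) and maximum-likelihood selection:

```python
import math
import random

def log_likelihood(samples, p):
    # Log-likelihood of i.i.d. Bernoulli(p) samples (p strictly in (0, 1)).
    return sum(math.log(p) if x else math.log(1.0 - p) for x in samples)

def select_distribution(samples, candidates):
    # Pick the candidate parameter under which the observed samples
    # are most likely -- a stand-in for the "determine the distribution
    # with high probability" goal in the abstract.
    return max(candidates, key=lambda p: log_likelihood(samples, p))

# Draw samples from a known member of the class, then recover it.
random.seed(0)
true_p = 0.8
samples = [1 if random.random() < true_p else 0 for _ in range(200)]
best = select_distribution(samples, [0.2, 0.5, 0.8])
print(best)
```

With enough samples, the probability of selecting the wrong candidate decays exponentially; the framework's actual results concern how many samples and how much computation such identification requires for richer distribution classes.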
gold:hypernym
dbr:Framework
prov:wasDerivedFrom
wikipedia-en:Distribution_learning_theory?oldid=1083045386&ns=0
dbo:wikiPageLength
22325
foaf:isPrimaryTopicOf
wikipedia-en:Distribution_learning_theory