About: Neural tangent kernel

An Entity of Type: owl:Thing, within Data Space: dbpedia.demo.openlinksw.com, associated with source document(s)
http://dbpedia.demo.openlinksw.com/c/8fqcpSkhWX

In the study of artificial neural networks (ANNs), the neural tangent kernel (NTK) is a kernel that describes the evolution of deep artificial neural networks during their training by gradient descent. It allows ANNs to be studied using theoretical tools from kernel methods. The NTK was introduced in 2018 by Arthur Jacot, Franck Gabriel and Clément Hongler. It was implicit in contemporaneous work on overparameterization.
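The definition summarized above can be stated compactly (a standard formulation from the NTK literature, not taken verbatim from this record): for a network function \(f_\theta\) with parameter vector \(\theta\), the NTK is the inner product of parameter gradients, and it governs the gradient-flow dynamics of the network outputs during training.

```latex
\Theta_\theta(x, x') \;=\; \nabla_\theta f_\theta(x)^{\top}\, \nabla_\theta f_\theta(x')
```

Under gradient flow on an empirical loss \(\mathcal{L} = \sum_{i=1}^{n} \ell\bigl(f_\theta(x_i), y_i\bigr)\), the outputs then evolve as

```latex
\partial_t f_{\theta(t)}(x) \;=\; -\sum_{i=1}^{n} \Theta_{\theta(t)}(x, x_i)\,
\frac{\partial \ell}{\partial f(x_i)} ,
```

which is what makes a constant \(\Theta\) (the large-width limit mentioned below) so useful: the dynamics become those of kernel regression.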

Attributes    Values
rdfs:label
  • Kernel de tangent neural (ca)
  • Neural tangent kernel (en)
  • Kernel de tangente neural (pt)
rdfs:comment
  • In the study of artificial neural networks (ANNs), the neural tangent kernel (NTK) is a kernel that describes the evolution of deep artificial neural networks during their training by gradient descent. It allows ANNs to be studied using theoretical tools from kernel methods. The NTK was introduced in 2018 by Arthur Jacot, Franck Gabriel and Clément Hongler. It was implicit in contemporaneous work on overparameterization. (en)
  • In the study of artificial neural networks (ANNs), the neural tangent kernel (NTK) is a kernel that describes the evolution of deep artificial neural networks during their training by gradient descent. It allows ANNs to be studied using theoretical tools from kernel methods. The NTK was introduced in 2018 by Arthur Jacot, Franck Gabriel and Clément Hongler. It was also implicit in some contemporaneous work. (pt, translated)
dcterms:subject
Wikipage page ID
Wikipage revision ID
Link from a Wikipage to another Wikipage
Link from a Wikipage to an external page
sameAs
dbp:wikiPageUsesTemplate
has abstract
  • In the study of artificial neural networks (ANNs), the neural tangent kernel (NTK) is a kernel that describes the evolution of deep artificial neural networks during their training by gradient descent. It allows ANNs to be studied using theoretical tools from kernel methods. For the most common neural network architectures, in the limit of large layer width the NTK becomes constant. This enables simple closed-form statements to be made about neural network predictions, training dynamics, generalization, and loss surfaces. For example, it guarantees that sufficiently wide ANNs converge to a global minimum when trained to minimize an empirical loss. The NTK of large-width networks is also related to several other large-width limits of neural networks. The NTK was introduced in 2018 by Arthur Jacot, Franck Gabriel and Clément Hongler. It was implicit in contemporaneous work on overparameterization. (ca, translated)
  • In the study of artificial neural networks (ANNs), the neural tangent kernel (NTK) is a kernel that describes the evolution of deep artificial neural networks during their training by gradient descent. It allows ANNs to be studied using theoretical tools from kernel methods. For most common neural network architectures, in the limit of large layer width the NTK becomes constant. This enables simple closed form statements to be made about neural network predictions, training dynamics, generalization, and loss surfaces. For example, it guarantees that wide enough ANNs converge to a global minimum when trained to minimize an empirical loss. The NTK of large width networks is also related to several other large width limits of neural networks. The NTK was introduced in 2018 by Arthur Jacot, Franck Gabriel and Clément Hongler. It was implicit in contemporaneous work on overparameterization. (en)
  • In the study of artificial neural networks (ANNs), the neural tangent kernel (NTK) is a kernel that describes the evolution of deep artificial neural networks during their training by gradient descent. It allows ANNs to be studied using theoretical tools from kernel methods. For most neural network architectures, in the limit of large layer width the NTK becomes constant. This allows simple closed-form statements to be made about neural network predictions, training dynamics, generalization, and loss surfaces. For example, it guarantees that sufficiently wide ANNs converge to a global minimum when trained to minimize an empirical loss. The NTK of large-width networks is also related to several other large-width limits of neural networks. The NTK was introduced in 2018 by Arthur Jacot, Franck Gabriel and Clément Hongler. It was also implicit in some contemporaneous work. (pt, translated)
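The quantity described in the abstracts can be computed directly on a toy model. The sketch below (all names, sizes, and the two-layer architecture are illustrative assumptions, not from this record) builds the empirical NTK of a small scalar-output network as the Gram matrix of parameter gradients, the object that becomes approximately constant as the hidden width grows.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 3, 64  # input dimension and hidden width (toy sizes, assumed)

# Two-layer network f(x) = w2 . tanh(W1 x), with 1/sqrt(fan-in) scaling
# as in the NTK parameterization.
W1 = rng.standard_normal((h, d)) / np.sqrt(d)
w2 = rng.standard_normal(h) / np.sqrt(h)

def param_grads(x):
    """Gradient of the scalar output f(x) with respect to all parameters,
    flattened into a single vector."""
    a = np.tanh(W1 @ x)                     # hidden activations, shape (h,)
    d_w2 = a                                # df/dw2
    d_W1 = np.outer(w2 * (1 - a**2), x)     # df/dW1 via the chain rule
    return np.concatenate([d_W1.ravel(), d_w2])

def empirical_ntk(x, xp):
    """NTK entry: inner product of parameter gradients at two inputs."""
    return param_grads(x) @ param_grads(xp)

x1 = rng.standard_normal(d)
x2 = rng.standard_normal(d)
K = np.array([[empirical_ntk(a, b) for b in (x1, x2)] for a in (x1, x2)])
```

Because `K` is a Gram matrix of gradient vectors, it is symmetric and positive semi-definite by construction; repeating the computation at larger `h` illustrates the width-limit behaviour the abstract refers to.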
prov:wasDerivedFrom
page length (characters) of wiki page
foaf:isPrimaryTopicOf
is Link from a Wikipage to another Wikipage of
is Wikipage disambiguates of
is foaf:primaryTopic of
Data on this page belongs to its respective rights holders.